Scott Rettberg, director of the Center for Digital Narrative (CDN), in conversation with Nick Montfort, who is leading the CDN's Computational Narrative Systems research node.
SR: Welcome to Off Center, the podcast about digital narrative and algorithmic narrativity. My name is Scott Rettberg, and I’m the Director of the Center for Digital Narrative at the University of Bergen. In this podcast, I’ll have conversations with the researchers at the center, as well as other experts in the field, to discuss topics revolving around digital storytelling and its impact on contemporary culture. Did you know that computers were being used to generate stories decades before ChatGPT burst onto the scene? From text adventures to programs that can generate complex and multi-layered narratives, the interaction between computers and storytelling goes back to the beginnings of the history of the computer. Today on Off Center, we’ll be talking with Nick Montfort. Welcome, Nick.
NM: Hey, glad to be here.
SR: Nick, you’re a Professor of Digital Media at MIT, and also a professor II at the University of Bergen with the Center for Digital Narrative and leading our Computational Narrative Systems research node. I’m very excited to have you here in town for a month, as you’ll be most years, and we’ve been delighted to have you here for four years at UiB.
NM: Yeah, and I’m looking forward to talking about all the Center for Digital Narrative related topics today.
SR: Great. You have had a very interesting path throughout your career, as a researcher, scholar, historian to a certain degree, critic, poet, and a fiction writer in some projects. How did you develop your interest in digital media and digital narrative?
NM: All right, so one of the things I can talk about there is my practice, doing creative types of work. I really knew early on that I was interested in language, literature and computing. So, there’s some people, some of whom I know, who were writers first and foremost, and then got really into computers. There are some people who were programmers, and either through collaborating with writers or for other reasons, they were like, “hey, I can do something connected to this.” That’s the case with the other arts as well, people in music and in visual art, and so forth. A lot of these people can identify, “there’s this one moment where I was an abstract expressionist, and then I was like, no way, I’ve got to do this computer thing, you know?” But for me, there’s really no moment like that. I think I was writing on my own out of my own interests when I was young. At the same time, I was starting to program in the BASIC programming language, some at school, some at home.
SR: Playing text adventures too, right?
NM: Yeah. The other thing is that I was playing interactive fiction, and you would call it playing because it was entertainment software. Nowadays, people look at this, it’s all text, and they’re like, “oh, did my computer crash or something?” But at the time, you’d see commercial games, like Infocom’s Zork trilogy and Deadline, all the stuff that that company brought out, which was actually in literary genres. I bought some of those games, as I call them, although they’re also called interactive fiction, from bookstores. They were sold in bookstores at the time, people hadn’t quite figured out where this software was supposed to go, and they were pretty advanced for what you’d see on a home computer, as far as natural language technology, really. But they were also digging into the different possibilities of these literary genres. They had Douglas Adams do a collaboration, actually two collaborations, Hitchhiker’s Guide, and then also Bureaucracy. So, you look at the stuff and you’re like, “well, it’s literary.” But it’s also computational.
SR: And that was this sort of amazing period, three or four years before graphics cards, where interactive fiction was the form of computer game that people engaged with.
NM: There were a lot of factors. Even in the early days of graphics, you had home computers able to do some interesting stuff graphically. Infocom also had the advantage that they were able to put out their games across all this wide diversity of computer platforms. So, it was like one of those periods where the biological diversity is so great, because you had like 40 different types of computers at the same time. And Infocom was able to make their text games for all of these. That was some of what I was interested in. Early on, I was interested in interactive fiction, and what the literary possibilities were for this form. So, I did a computer science undergrad degree, and I wrote my undergrad thesis for my liberal arts degree on interactive fiction.
SR: I remember when I first met you, you only had one master’s degree at that point.
NM: I was on my way to being way overeducated.
SR: The first time I met you, you were carrying around this, even at the time, old-fashioned laptop that had a sticker for Winchester’s Nightmare on it, and it was a hardback work of interactive fiction.
NM: Yeah. I brought that to Atlanta for Digital Arts and Culture 1999, I’m pretty sure. And so, besides writing this thesis, which really was the basis for Twisty Little Passages, my first academic monograph, I also wrote some interactive fiction, including Winchester’s Nightmare, which was not my best or most successful. What people ended up liking a lot was Ad Verbum, which is a game that involves a lot of verbal puzzles. I found very interesting connections between what can be done computationally, and narrative possibilities. But the other thing is that interactive fiction really gave me permission to think about all these sorts of convergences between literature and computing, and the arts and computing even. And so, what I do now is more related to poetry and text generation, and a lot of it’s not interactive in the same way that interactive fiction is. But I still think that having been involved with that gave me permission to do some of these other projects and was important for the path that I’ve taken.
SR: And you’ve got a really wide-ranging scholarly career, too. One of the other early books that you did, with Noah Wardrip-Fruin, I still think is really important. I had a new PhD student who’s new to the field of electronic literature and digital culture, and she said, “Where should I start reading?” and I was like, “Here’s The New Media Reader, read all of it.” And that was a key anthology of texts from the early history of computing, digital culture, and digital media, that really played a formative role in getting people to think about this as a field in its own right. And then you’ve done a lot of work with what you might call platform studies.
NM: Sure. With regard to The New Media Reader, I’m glad that our efforts there paid off. The actual selections came down to what we, with the approval of reviewers and MIT Press, wanted to include, but we had an extensive advisory board. Part of the idea was that if you wanted to have this material at the time, which is early days of the Web, there weren’t shadow libraries, and there weren’t the facilities for getting things through official interlibrary loan or journals online. You would probably have to make a course packet if you wanted something like that.
SR: If you could find everything.
NM: Yeah. So, the less grandiose version of it was, let’s just take what people want as a course packet and put it together. We did write some short introductions, Michael Crumpton, who’s the designer, did, I think, great work presenting that. And actually, it’s no longer part of the published book, but I designed the CD-ROM which still works.
SR: I still have a copy.
NM: Yeah. And so, we put some stuff on there, like the video of The Mother of All Demos by Douglas Engelbart, and that’s online at this point.
SR: I think it’s really important to pull these things aside and put them together, and it makes it recognizable. Similar to a project that we did together, the first Electronic Literature Collection, with Kate Hayles and Stephanie Strickland. We co-edited the first anthology of electronic literature, and it’s a similar thing where you could get this stuff online, but maybe a year from now you can’t find it anymore.
NM: Putting it in a more fixed form. At this point, it seems funny in this day and age, because you can’t find a computer that has an optical drive. But we had it on CD. And that CD was also published as part of Kate Hayles’s book, and we got all the authors to agree to license their work under a Creative Commons license. So, if you’re The Open University in the UK or something and you want to put your own server online with those materials and make them available to your students, you can do that.
SR: Yeah, we make a practice every time an Electronic Literature Collection comes out, we install it on every computer in our lab. You don’t need to ask any questions to do that. And that’s another thread of your work, including your creative work that’s been, I think, very important to you is the sort of idea of open access, using free and open-source software, even in your own projects. Making the code available, so people can learn from it, read it, modify it, and play with it.
NM: Yeah. And that’s the case with very small-scale things which have been taken up, thanks to you and Tokyo Garage with Taroko Gorge, but some of the projects like Curveship, which is a more extensive one, have been made available as platforms people can use. And one of the things about it is you sometimes find that a community grows up around something you release as free libre software. Sometimes it doesn’t, but you make it available. And even if it doesn’t become a sensation that a lot of people want to use, people are able to repeat your experiments, they’re able to get insight into the creative work that you’ve done using that. So, there’s a lot of benefits to it. And it also suits my own ethos about what software should be, intellectually, in our world, a little bit more E = mc², and a little bit less like the Transformers movies.
SR: You’re someone who’s always had a foot in both worlds. From the technology and computer science side of investigation, you have a PhD from the University of Pennsylvania in computer science, you have a degree in poetry from Boston University, and you have another master’s degree from the Media Lab at MIT. And so, I think one of the things that’s different from some of the people, like me, who came into electronic literature through English training, is that you’ve always had this acute awareness, both of the poetics, but also of the technological apparatus around it, and of the computational thinking involved in creating these works. And I just want to come back to that idea of platform studies that you’ve been involved in and edited a series of books on.
NM: The very first origins of that sort of thinking for me was when I was invited to give a talk at Yale. And this was in the rather early days of game studies. I thought people are really keen on talking about things from the Nintendo Entertainment System on up, but there’s this very successful system in the US, the Atari Video Computer System, VCS, or Atari 2600. The games look primitive, but they were very compelling at the time. And so, I did a talk about Combat. I took this model that Lars Konzack had developed, where there’s different layers that you can look at, for computer games specifically, but I thought all types of digital media, digital art, creative productions in general, and so modified that. He put a cultural layer on top and I thought, no, culture affects everything, including how people write programs, including how people design hardware.
NM: From that point, I started working with Ian Bogost, and we developed some ideas around the Atari VCS, not just about Combat, but that whole platform, and about the platform studies approach as we saw it. So, Ian and I wrote a book called Racing the Beam. And we also started a series with MIT Press, Platform Studies. Racing the Beam incorporates these ideas, and it has a methodology to it: we reverse engineered some of the cartridges, and we looked at the material history of the system, by analogy to book history, or the material history of texts. We did a few interviews, but it was not a qualitative social science approach that we took; it has some media historical elements to it. When we started the Platform Studies series, we really wanted to make it an invitation for people to focus on platforms, not to prescribe that you go get your own platform and do the thing that we did to it, but take your own approach, which fortunately, people have done. We now have 10 books in that series, with four more coming out. They cover, in some cases, software platforms; there’s a book on Flash that came out. They cover computers, like the BBC Micro and the Amiga, and the Minitel system, a telecommunications terminal in France, and then, of course, a lot of video game systems. But platform studies isn’t only about video games.
SR: I know the Commodore 64 one is in progress.
NM: Yeah, we’re looking forward to four books. The most recent of these is quite interesting, Simon Rowberry’s book on the Kindle. And one of the things about it is that he brings together these two ideas of platform, because when Ian and I talk about platform studies in the context of our series, we’re talking about computational platforms, things that you can program. So, a lot of developers recognize this concept. But if you talk to people in digital media scholarship or internet studies, they will insist that platforms are either media delivery platforms like YouTube and Spotify, or they are social media platforms. Like, whatever value you want to assign to X. That’s all true. I mean, those things are platforms. There are all sorts of platforms.
SR: Political platforms.
NM: Train platforms, oil platforms, right? But we just wanted to focus on the computational side of things. What Simon Rowberry did in the book on the Kindle, which is called, very cleverly, Four Shades of Gray, is that he tries to bring together these two ideas, because the Amazon Kindle is part of Amazon’s media delivery platform. It’s for getting eBooks out to people. It’s also a piece of hardware that is programmed, not by end users, as it happens in this case, but it does have this material and computational specificity. And it’s even got a few aspects of, it’s sort of weird to think of it as a social media platform, but highlighting does bring a little of that in.
SR: Yeah, and it’s tied into Goodreads, and so it is part of an ecology.
NM: That’s one example. The book previous to that, Alex Custodio’s Who Are You?, which is about the Game Boy Advance, is also excellent, and brings in queer studies approaches, and looks at the afterlives of systems, and the way that there’s hardware modifications, different sorts of commercial companies that are like “let’s give you an improved case design and a better screen” and things like this.
SR: Well, one of the things that I think is so interesting and important about the platform studies series, that I think connects a bit to your creative work. If we think about literature, an important distinguishing factor of electronic literature, and of working with platforms is that the platforms have constraints and have affordances embedded within them. As I think about your work, there’s also an interest in those types of constraints, but also in this kind of history of literary constraints, things like the Oulipo. Maybe we could say a little bit about that connection?
NM: Yeah. One of the things that I’ve been working with, and even writing about to some extent recently, is the idea that, in my creative work, I have a platform practice. So not just doing platform studies but engaging with computers for what they are specifically and concretely. I wrote a recent catalogue essay called “Why I Am Not an Algorist,” and I think that line of work is quite important in the history of art. But from my own standpoint, it’s not useful to go around imagining in this abstract way, “oh, everything’s a Turing machine.” I’m writing algorithms. In fact, I’m doing work that’s quite specific to particular platforms. And sometimes I do go back, I do work on the Commodore 64, the Apple II, but I do work also on the modern-day web browser, which has its own model of what computing is. The Web and the browser didn’t originate as a way of running programs, but JavaScript was devised, and it’s incorporated into HTML. It’s part of what web browsers do. And so, what types of things are the essence of the Web, and the essence of computation coming together? That’s one of the things I’ve been investigating. I’m interested nowadays in a somewhat more obscure language, Perl, because it has amazing facilities for very concise means of text manipulation.
SR: One of the distinguishing features of your work, I’d say, is a kind of minimalism. For example, you have a series called ppg256, programs that could only be 256 characters long, and yet could create literary output of some kind. And maybe the idea there is the elegance of the program, which is a way of computational thinking: how elegant can the code be, how efficient can it be?
NM: Yeah, there’s different types of concision. You can try to make something occupy the least storage, the source code can be the shortest, it can run the fastest possible, or it can use as little memory as possible while running. One of the things that really has helped me in thinking about the sort of projects that intrigued me the most is some of the discussion by a critic, Paul Stephens, whose first book, The Poetics of Information Overload, is about maximalism and excess. And then he has a book from MIT Press more recently, absence of clutter: minimal writing as art and literature. So, he’s looked at both extremes, the overload of information and extreme brevity. One of the things about some of the projects that I do, for instance, I have an even shorter series of Perl poems called Concrete Perl. Instead of being 256 characters long, they’re 32 characters long. And one of those is called All the Names of God, and it’s in reference in part to Arthur C. Clarke’s story “The Nine Billion Names of God,” but it’s also a Kabbalistic type of reference. It just generates every permutation of increasing length of letters of the alphabet. Because it tries to do this for every length all at once, generating them and storing them in memory, it will eventually crash on any computer. What I’m trying to explore, using a very, very tiny amount of code, is what this supposedly general-purpose programming language, running on a supposedly general-purpose computer, is capable of doing. How can I make it do something that’s maximal using something minimal? What are the ways that I can poke and prod it to elicit something interesting?
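The behavior Montfort describes can be sketched in a few lines of Python (an illustration only; the actual piece is 32 characters of Perl, and, unlike this capped sketch, it accumulates every string of every length in memory until the machine gives out):

```python
from itertools import product
from string import ascii_lowercase

def names(max_len):
    """Yield every string over the lowercase alphabet in order of
    increasing length: 'a'..'z', then 'aa'..'zz', and so on.
    The original Concrete Perl piece has no cap and stores the
    results, which is why it eventually crashes any computer."""
    for length in range(1, max_len + 1):
        for letters in product(ascii_lowercase, repeat=length):
            yield "".join(letters)

demo = list(names(2))
print(demo[:3], demo[26], len(demo))  # ['a', 'b', 'c'] 'aa' 702
```

The growth is exponential: 26 one-letter names, 676 two-letter names, and 26 to the nth power at length n, which is the whole point of the piece.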
SR: So, one of the things we’ve talked about, some years past, is if we do go back to the French and Oulipo, the idea of potential literature, and one of the things that I think is very interesting about text generation systems, among other types of electronic literature, is the idea that these are potential literature machines. In other words, that we have a program and it’s going to produce some kind of output. We might know the constraints under the possibility space, but what we get out of it, when we actually run the generators is something that we might not have necessarily anticipated.
NM: Most generally, anytime you have a system that accepts input of any sort and does some type of transformation to that, it could just be filtering that input in some way, it doesn’t have to be an elaborate, deep learning system of some sort. Then you’d say that what you’ve defined, your transforming program or your language filter that you’ve composed is something that can generate this possibility space of literature, rather than being like a single literary work itself. That’s what the Oulipo were interested in, I guess they’re still interested in it. As far as I know, they don’t put a lot of emphasis on literary works themselves, unless they’re record breaking. So, the question of how can we define these possibility spaces? These types of potential literature is what’s most interesting there. And I think it’s an intriguing perspective, because it’s aligned with a lot of computational investigation, computational art. There’s a tremendous amount to say about why the Oulipo’s work in computing didn’t take off to a greater extent. There has been the Alamo [Atelier de Littérature Assistée par la Mathématique et les Ordinateurs / Workshop for Literature Assisted by Mathematics and Computers], but those investigations haven’t had the same sort of cultural impact.
SR: That sort of focus on procedurality, and of the poetics of procedurality, in other words, that the procedures themselves can be read as an artwork, and can be both material and conceptual art.
NM: For instance, when people are writing programs, and they’re like, “I’m going to write an implementation of one of the Oulipo’s ideas,” it’s not something that came out of the Alamo. It’s something that came out of the core Oulipo group and its discussions. It is, I think, fascinating that there’s a connection to the 21st-century avant-garde in this computer work, some of which is people working on computers explicitly, like the Algorists or the history going back 70 years of text generation, and some of which is mathematical and literary investigation and discussion.
SR: You’re leading the Computational Narrative Systems Node of the Center for Digital Narrative, and you’re going to be working with Rafael Pérez y Pérez on this node, as well as a PhD and postdoc researcher, and bringing this perspective into our other investigations of digital narrative. So, what is a computational narrative system?
NM: Well, I actually like to think of the focus, or the way that you study things, rather than making a tight boundary or cut-off and saying this one is a computational narrative system, this one isn’t. So, to take a computational narrative systems approach, the thing that you’d say is that instead of focusing on the interactive, or multimedia, or telecommunications facilities of the computer, we want to see how it does its processing, its symbol manipulation. We’re fundamentally concerned with, in this case, how narrative is being modelled or represented in these systems, and how the elements of narrative are being computationally represented and manipulated by the computer.
SR: What are a couple of classic examples? I know story generation is something that didn’t just arrive yesterday, and certainly didn’t just arrive with ChatGPT, which we can come back to. What are some of the early examples of story generation systems?
NM: There’s a history going back, at least for story generation specifically, to around 1960, like SAGA at MIT. There’s a system that Joseph Grimes developed for creating short standalone stories when he was in Mexico City. There’s a very well-known early system that was developed as part of PhD study, and a dissertation was written about it: James Meehan’s Tale-Spin. Not the first system, it’s from the late 1970s, but an important system, definitely. And then there are a lot of other projects that have been done, some of them with cultural specificities encoded, so that French romances would be generated with an awareness that you can’t run off with a Christian woman, but you can run off with this other woman even if she’s already married, things like this. There’s actually a research field still active, which nowadays, I’d say, is most connected to the computational creativity community, that deals with explicitly creating representations of narrative. Researchers there are most interested in the creative generation of narrative: how can we do things that are novel, that are interesting, that are in some ways useful?
SR: There are two general ways of looking at it. One is that this is a computer science problem, and there are a lot of people investigating it from that perspective–
NM: I would say specifically a cognitive science problem, because if we wanted to separate that out, we’d say it’s a matter of algorithmic efficiency if it were purely computer science. But the problem, as it intersects with cognitive science, is how is it that people think about stories, and how can we build computer representations of that, and models of that?
SR: Right. In the last 10 minutes, or a few years, large language models have very quickly become a huge part of our culture. Anyone can walk up to ChatGPT and say, “tell me a story about a girl, a grandmother, and a wolf,” and it’ll spit something out at you. And I know you have mixed feelings about LLM-based AI. There’s a distinction between the kind of work that you’ve done, and Rafael has done, and I know there’s a whole history of diverse approaches to representing stories in computational narrative systems. But what would you say is the main distinction between LLMs that can respond with these elementary narratives and the story generation systems?
NM: Well, so the LLM is, as it says on the tin, a language model; actually, John Cayley corrects that and says it’s a text model. It’s not a model of speech, except for the fact that there’s some transcribed speech, presumably, in there that it has been pre-trained on. And so, when you look at it, you want to use an LLM, not ChatGPT, because ChatGPT is an LLM plus. I think of the LLM in a lot of ways as the id of the internet, representing the collective unconscious of what everyone has written. This is a gross approximation, but ChatGPT adds a superego: let’s not do things that are offensive, let’s get people in Kenya to downvote the traumatic texts that come out of it, and so forth.
NM: So, setting that aside for a moment, when you actually use the LLM itself to generate text, you’ll have parameters that relate to the statistics of language. Those parameters include temperature, how deterministic or random you want the output to be, and Top-K, from what range of tokens shall we select? They’re very specific to the statistics of language. When you have a narrative model of the sort that Rafael and I have both built, but also the other things we’re talking about, Tale-Spin, Racter, and an amazing model, DAYDREAMER, by Erik Mueller, which is a cognitive model of how people associate in daydreams, the parameters have to do with cognition and narrative. I’m not saying the models are all perfect, but the parameters that you have, for instance in my system Curveship, are: who is going to be the “I” of the story, who’s going to be the “you” of the story? Whom shall we focalize? How should we change the chronological order in the telling of events, which can be different from the order of the events of the underlying story? These are things directly taken from narrative theory and narratology. Those are the main distinctions. For instance, you can also have a Markov chain that ends up telling stories, but it does that as a consequence of the type of language model that it is. A Markov chain is a language model, a very, very simple language model. Assuming that it’s a textual Markov chain, it can generate stories too, but it doesn’t do that according to parameters that you can change.
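As a rough sketch of the language-statistics parameters Montfort mentions (the token logits here are made up for illustration; this is not any real model's API), temperature and Top-K act on the distribution over next tokens like this:

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None):
    """Pick a next token from a dict of token -> logit.
    Low temperature sharpens the distribution (0 is fully
    deterministic); top_k keeps only the k most likely tokens."""
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]
    if temperature == 0:
        return items[0][0]  # argmax: always the most likely token
    weights = [math.exp(logit / temperature) for _, logit in items]
    return random.choices([tok for tok, _ in items], weights=weights)[0]

# Hypothetical next-token scores after "tell me a story about a..."
logits = {"wolf": 2.0, "girl": 1.0, "grandmother": 0.5}
print(sample(logits, temperature=0))  # always "wolf"
```

Note that nothing here knows anything about narrators, focalization, or event order; the only knobs are statistical, which is exactly the contrast with Curveship-style parameters.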
SR: It’s fascinating and surprising how well these systems work, given that they are probabilistic language models. Even though the stories that they create individually, if you hit a button and spin out a story, it’s not going to be a very good story, that approach ends up being able to produce the kind of results that it does. At the same time, it is a much more black box form of computation.
NM: In certain ways, lots of things are completely known about it, like backpropagation, the way these systems are trained. It’s not a mystery as to how to do this, which is why there’s a bunch of free software models.
SR: It’s not mystical, but the actual operations, where if you go through the type of story generation system that you’ve created, you can understand the specific decisions that were made by the program to produce the specific output.
NM: Yes, if we walk it back to the super simple, but related, case of a Markov chain, then you can get a system that implements a Markov chain, type in “once upon a time,” and let it go. And it’s going to actually generate something that’s sort of story-like. For each word or token that it generates, you can tell exactly why it did this, because of the conditional probability, and then everything is clear to you. But the problem is that it’s clear in a way that is useless. You have to go back and say, “Okay, well, what is it in the original data?” Let’s say I want to make stories that are politically pointed, and are parables for the contemporary political situation, or stories that are more internalized and darker, or stories that focalize particular characters, or that have flashbacks and flash-forwards that upset the chronology. How do I do that? It’s not that we don’t know what’s going on. And in fact, there are things we can do very well, because we can vary the temperature; we can basically make a large language model produce language that’s more random and outrageous, or we can have it do something that’s much more–
SR: Like the most likely outcome of asking for a story about a wolf.
NM: In fact, if you turn that temperature all the way down to zero, you’re just going to get the same word, “Wolf, Wolf, Wolf”. You’re going to get something that’s completely deterministic.
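The Markov-chain case can be made concrete with a minimal Python sketch (illustrative only, not any particular published system): every step is fully traceable to a conditional probability in the training text, yet there is no parameter for anything narrative.

```python
import random
from collections import defaultdict

def train(text):
    """First-order Markov model: each word maps to counts of the
    words that followed it in the training text."""
    model = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n=8):
    """Walk the chain from a start word. Every choice is explained
    by the follower counts -- clear, but clear in a way that offers
    no knob for focalization, chronology, or theme."""
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        tokens, counts = zip(*followers.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

model = train("once upon a time a wolf met a girl and a wolf ran")
print(generate(model, "once"))
```

With this toy corpus, "once" can only be followed by "upon" and "upon" by "a"; only at "a" does any choice appear, weighted by how often each follower occurred.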
SR: This is going to be a fascinating research conversation that we’re going to be having over the course of the next decade or so. And that sort of dialogue between these types of systems and seeing where AI takes us next, and what the advantages are, of different models, both of AI and text generation will be a subject that we’ll have plenty of time to talk about. Before we go though, you’re a rapper?
NM: That’s correct.
SR: This has become a fairly recent avocation of yours.
NM: Not a lifelong practice of mine, but yeah.
SR: Can you say a little bit about what you’re doing with rap, and I know you’re also beginning to think about the connections between computational narrative systems or text generation systems and rap.
NM: Narrative is a few steps down the road, but I have become involved, initially with the Society of Spoken Art, as someone who’s in resonance with that organization, and then as a member of the organization. The practice of rap that I am particularly intrigued with is that of improvisational, responsive rap, the type that you would see in very informal settings, like a cypher in a plaza or on a street corner. Maybe after the actual rap concert, people would come out and start rapping on the street. And so, it’s not the type of thing that you would see highlighted in recorded music. You find this on YouTube, where people are now able to monetize this. They’re able to make money being improvisational, responsive rappers, whether they connect by video or go out in person, but that’s a recent phenomenon. It’s a different type of thinking that relates to orality. It goes back to some of the ways that people thought about narrative and about composition of poetic lines and delivery of poetry in primary oral cultures, such as Greece prior to The Iliad and The Odyssey being written down. The difference is that we don’t live in a primary oral culture, we live in a hyper-literate culture. The people who are really good at improvisational, responsive rap are literally reading the dictionary again and again, in some cases, or they’re certainly keeping up on the news, reading stuff to be able to incorporate that into the bars they deliver. That’s some of my general interest. I have been working on a system recently, Cyber Rapper / Cypher Rapper. Its first component, Cyber Ear / Cypher Ear, or CE/CE, listens to rap and produces one evaluation, not the only correct evaluation, but it embodies a value system about what’s good in these particular verses.
SR: Yeah, a rap writer. Okay, well, we got to get going. We’ve been talking with Nick Montfort, Professor of Digital Media at the Massachusetts Institute of Technology and Professor II with the Center for Digital Narrative. Before we go, why don’t you take us out on a bar or two?
NM: Well, if you step to me.
*Nick and Scott Rap*
SR: Thank you, Nick Montfort.
References
Chandler, Otis and Elizabeth K. Chandler. 2006. Goodreads. Social network site. https://www.goodreads.com/.
Custodio, Alex. 2020. Who Are You? Nintendo’s Game Boy Advance Platform. MIT Press.
Hayles, N. Katherine, Nick Montfort, Scott Rettberg and Stephanie Strickland. 2006. Electronic Literature Collection: Volume 1. https://collection.eliterature.org/1/.
Infocom. 1987. Bureaucracy. Various platforms.
Infocom. 1982. Deadline. Various platforms.
Infocom. 1984. The Hitchhiker’s Guide to the Galaxy. Various platforms.
Infocom. 1977. Zork. PDP-10.
Meehan, James. 1976. Tale-Spin. A program that writes simple stories.
Montfort, Nick. 1999. Winchester’s Nightmare: A Novel Machine. Interactive fiction. Web edition: https://nickm.com/if/parchment/index.html?story=stories/winchest.z8.js.
Montfort, Nick. 2005. Twisty Little Passages: An Approach to Interactive Fiction. MIT Press.
Montfort, Nick. 2006. Ad Verbum. Interactive fiction. Web edition: https://nickm.com/if/parchment/index.html?story=stories/adverbum.z5.js.
Montfort, Nick. 2008. ppg256. (Perl Poetry Generator in 256 characters) series. https://collection.eliterature.org/2/works/montfort_ppg256/ppg256.html.
Montfort, Nick. 2023. “Why I Am Not an Algorist.” In Poème Objkt Sbjkt, Paris: Librairie Galerie Métamorphoses, pp. 126–131.
Montfort, Nick and Ian Bogost. 2009. Racing the Beam: The Atari Video Computer System. MIT Press.
Montfort, Nick. 2009. “Taroko Gorge.” On nickm.com; in #! (Denver: Counterpath, 2014); with remixes in Electronic Literature Collection, volume 3, February 2016. https://nickm.com/taroko_gorge/.
OpenAI. 2023. ChatGPT [Large language model]. https://chat.openai.com/chat.
Rettberg, Scott. 2009. “Tokyo Garage.” On retts.net. https://retts.net/tokyogarage.html.
Rowberry, Simon Peter. 2022. Four Shades of Gray: The Amazon Kindle Platform. MIT Press.
Stephens, Paul. 2015. The Poetics of Information Overload: From Gertrude Stein to Conceptual Writing. University of Minnesota Press.
Stephens, Paul. 2020. absence of clutter: minimal writing as art and literature. MIT Press.
University of Bergen. n. d. “Center for Digital Narrative.” https://www.uib.no/en/cdn.
Wardrip-Fruin, Noah and Nick Montfort. 2003. The New Media Reader. MIT Press.
This research is partially supported by the Research Council of Norway Centres of Excellence program, project number 332643, Center for Digital Narrative and project number 335129, Extending Digital Narrative.