Materiality and Matter and Stuff: What Electronic Texts Are Made Of

Following Katherine Hayles, Matthew Kirschenbaum agrees that materiality matters.

I’ve found both sides of the exchange about what cybertext theory can and can’t do useful and stimulating. I’m grateful to ebr and the various participants. Here I want to push the discussion of “materiality,” a word used by both Markku Eskelinen and Katherine Hayles, and a word I myself have been using since I started writing about digital media in the mid-1990s. For materiality does indeed matter, as Hayles has said. This is precisely the point I make (and a phrase I use) in an article forthcoming in the journal TEXT that examines the textual condition of what I call “first generation electronic objects” - a class of artifacts that have no material existence outside of computational file systems, which would include electronic fiction and poetry, and other types of hypertext and cybertext works. 1. See Kirschenbaum, “Editing the Interface: Textual Studies and First Generation Electronic Objects,” TEXT 14 (2002): 15-51. Portions of my remarks here are adapted from this article.

I would maintain that neither hypertext theory nor cybertext theory yet talks about the materiality of first generation electronic objects with anything near the precision or sophistication scholars habitually bring to bear on more traditional objects of literary or cultural studies. I know because I work in an English department (one of those tweedy professors Eskelinen scoffs at); and while I consider myself a media critic as much as a literary critic, I think I’ve learned a thing or two from my literary colleagues that may help us bridge that particular digital divide. Most readers of this discussion, for instance, have probably had occasion to refer to Michael Joyce’s afternoon, but that text’s colophon acknowledges that it has been published in no fewer than six different versions and editions over the years, with some substantial variants among them. Which afternoon do we really mean, then? And does it matter? Of course it does, or at least it should. A comparison of copies of the first (1987) and third (1992) editions of afternoon, for example, reveals a number of immediately observable differences: the third edition includes a bitmapped graphic on its electronic frontispiece; the number of textual nodes has increased marginally, from 536 to 539; the number of links, however, has increased by nearly a hundred, from 854 to 951. The electronic size of the work has also grown, from 235 kilobytes in the first edition to 375 kilobytes in the third. There are also marked differences in the text’s presentation across platforms: one might note that the appearance of the afternoon desktop icon differs dramatically between the Mac and PC. Look:
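The edition-to-edition comparison above is, in miniature, a descriptive bibliography of a digital object, and it can even be tabulated programmatically. A minimal sketch in Python (the figures come from the comparison above; the data structure and function are mine, for illustration only):

```python
# Bibliographical metadata for two editions of Michael Joyce's afternoon,
# as reported in the comparison above.
editions = {
    "first (1987)": {"nodes": 536, "links": 854, "kilobytes": 235},
    "third (1992)": {"nodes": 539, "links": 951, "kilobytes": 375},
}

def edition_deltas(a, b):
    """Return the change in each recorded feature from edition a to edition b."""
    return {key: editions[b][key] - editions[a][key] for key in editions[a]}

deltas = edition_deltas("first (1987)", "third (1992)")
print(deltas)  # {'nodes': 3, 'links': 97, 'kilobytes': 140}
```

The point of the toy example is simply that such variants are enumerable, countable facts about the artifact, exactly the kind of evidence descriptive bibliography trades in.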

The afternoon icon (Mac); Copyright 1990 Eastgate Systems

The afternoon icon (PC); Copyright 1993 Eastgate Systems

Literary scholars regularly edit (and also remediate) the texts of the past to ensure the persistence of literary canons, but how would one edit afternoon, historically important for its status as the first full-length work of electronic fiction? Would one do it in print, as the Norton Anthology of Postmodern Fiction has done? As a software emulation of the original Storyspace environment, as Norton’s Web-deliverable version of that same anthology tried to do, using JavaScript to recreate the behavior of guard fields and the like? By running an authentic copy of afternoon on an antique Mac, lovingly preserved in a climate-controlled computer lab, with access to the mouse and keyboard restricted to credentialed scholars conducting serious archival research? By using only non-proprietary data standards like XML/XSL, distributed under the terms of the open source community’s General Public License? These questions have been touched on before, particularly in the arena of digital preservation, but I pose them here with a broader agenda in mind. In particular, I want to ask what it means to treat an electronic text such as afternoon as a textual artifact subject to material and historical forms of understanding.
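Whatever form such an edition took, it would have to capture the behavior of features like guard fields, which make a link traversable only once the reader’s history satisfies a condition. Here is a hedged sketch of that logic in Python rather than Storyspace’s own terms; the class, field names, and node names are hypothetical illustrations, not Storyspace’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    target: str
    # Nodes the reader must already have visited before this link will fire.
    guard: set = field(default_factory=set)

    def traversable(self, visited):
        # A guard field blocks the link until its conditions are met.
        return self.guard <= set(visited)

# Hypothetical fragment: a link whose guard requires two prior visits.
link = Link(target="white_afternoon", guard={"begin", "yes"})

print(link.traversable(["begin"]))         # False: guard not yet satisfied
print(link.traversable(["begin", "yes"]))  # True: both guard nodes visited
```

Even this crude approximation shows why an edition of afternoon is not just a transcription of its prose: the conditional link logic is part of the text.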

To ask such questions - in effect, to take electronic texts seriously as texts - lays the groundwork for a theory of electronic textuality that departs widely from the existing approaches to the subject. The community that I believe has furnished us with the best accounts of texts and textual phenomena is neither hypertext theory nor cybertext theory, but the textual studies community. By “the textual studies community” I mean those scholars who practice textual criticism to produce critical editions as well as those who practice descriptive or analytic bibliography. Over the last fifteen to twenty years, this community has been the scene of an intense theoretical conversation on the nature of texts, textual transmission, and textual representation. A very selective list of participants in that conversation might include Betty T. Bennett, George Bornstein, Morris Eaves, Neil Fraistat, D. C. Greetham, Joseph Grigely, Jerome J. McGann, D. F. McKenzie, James McLaverty, Randall McLeod, Peter Shillingsburg, Martha Nell Smith, G. Thomas Tanselle, and Marta Werner among the many others who have published in the pages of Studies in Bibliography, TEXT, and the anthologies that have appeared since the mid-1980s. Yet this work has remained mostly invisible to hypertext and cybertext criticism and theory, despite some obvious shared interests.

Given the origins of textual criticism and bibliography in the study of manuscripts and printed books, the premise that their deliberations are relevant to digital content may seem odd and counter-intuitive. But in fact, textual criticism and bibliography should be recognized as among the most sophisticated branches of media studies we have evolved. Only the most literal-minded reader could think that because they have historically focused on parchment and paper these disciplines have nothing to say to the new artifacts and technologies of the digital age. (And here I must say that Eskelinen’s apparent ignorance of textual studies makes his remarks about English professors seem all the more parochial.)

Let me develop some sense of what textual studies has to offer to the present exchange. Michael Joyce is fond of the maxim: “Print stays itself; electronic text replaces itself.” 2. Michael Joyce, Of Two Minds: Hypertext Pedagogy and Poetics (Ann Arbor: University of Michigan Press, 1995), 232. Likewise, Jay David Bolter, in his widely read study Writing Space, asserts:

Electronic text is the first text in which the elements of meaning, of structure, and of visual display are fundamentally unstable…. This restlessness is inherent in a technology that records information by collecting for fractions of a second evanescent electrons at tiny junctures of silicon and metal. All information, all data in the computer world is a kind of controlled movement, and so the natural inclination of computer writing is to change, to grow, and finally to disappear. 3. Jay David Bolter, Writing Space: The Computer, Hypertext, and the History of Writing (Hillsdale, NJ: Lawrence Erlbaum, 1991), 31.

I use these examples from two of hypertext’s most prominent critics and theorists because they echo common refrains in much of the current writing about electronic textuality: that it is fundamentally volatile and unstable, and that these characteristics are themselves the product of the radical new ontologies of the medium. But is that really the most salient point? Textual scholars, after all, have long understood that all texts (and not just electronic texts) are capable of “replacing themselves”; nor is it actually true (if one inverts Bolter’s formulations above) that elements of meaning, of structure, and of visual display in printed texts are fundamentally stable. The opposition between fixed, reliable printed texts on the one hand, and fluid and dynamic electronic texts on the other - an opposition encouraged by the putative immateriality of digital data storage - is patently false, yet it has become a truism in the nascent field of electronic textual theory. Marie-Laure Ryan, for example, in a recent essay on “Cyberspace, Virtuality, and the Text,” undertakes to list some of the defining characteristics of printed versus electronic media. In her first column (print), Ryan includes such words as: “durable,” “unity,” “order,” “monologism,” “sequentiality,” “solidity,” and “static.” In her second column (the virtual) she opposes these with: “ephemeral [the very first item on the list],” “diversity,” “chaos,” “dialogism,” “parallelism,” “fluidity,” and “dynamic.” 4. Marie-Laure Ryan, “Cyberspace, Virtuality, and the Text,” Cyberspace Textuality: Computer Technology and Literary Theory, ed. Marie-Laure Ryan (Bloomington: Indiana University Press, 1999), 101-2. I cite Ryan (whose work I generally admire) because it is an example of the extent to which otherwise perceptive observers of the new media have failed to take notice of the most basic lessons textual studies has to teach. 
Ask a Beowulf scholar whether printed matter is really “durable” or “orderly” (the sole surviving manuscript of the poem was thrown singed and smoldering from a window during a library fire in the early 18th century, rendering portions of it illegible), or ask a Wordsworthian whether the texts of The Prelude (there are four of them: a two-book, a five-book, a thirteen-book, and a fourteen-book version) are “static” and exhibit “unity.” These are not special cases (pedantic exceptions to some normative textual condition), and the tendency to locate what is “new” about new media by contrasting their radical mutability with the supposed material solidity of older textual forms is a misplaced gesture, symptomatic of the general extent to which textual studies and digital studies have failed to communicate.

If we acknowledge that printed texts do not “stay themselves,” we should also ask what it means for electronic texts to “replace themselves.” The critical discourse surrounding digital technologies - often taking its cues from post-structuralism - has embraced their putative ephemerality, as if we must surrender ourselves to the eventual loss of our most precious data in order to realize the medium to its full potential. I want to suggest that there is a kind of “Romantic ideology” at work in this view of electronic textuality, and that it is a view which we can ill afford to maintain much longer. 5. “Romantic ideology” is Jerome McGann’s phrase. First, there are the pressing questions of digital preservation I touched on above: sooner or later “the natural inclination” of electronic information “to change, to grow, and finally to disappear” will cease to function as an aesthetic conceit and become instead a full-blown cultural crisis (from many perspectives it already has). Moreover, recent critical work by scholars such as Johanna Drucker and D. C. Greetham should lead us to question language such as “restless,” “fractions of a second,” “evanescent electrons,” and above all “natural inclination,” for such language serves to mystify what are in fact exquisitely precise, calculated, and controlled processes at the computational level. Drucker, for example, unequivocally pinpoints the material basis of digital objects simply by following the kind of reference Bolter makes to “tiny junctures of silicon and metal” to its logical conclusion:

“Code” always contains a stored electronic sequence that includes the address of any particular piece of information - thus the binary sequence, the ultimate “difference” which constitutes the identity of any data in code storage, is also always topographic, place specific, sited, and thus a location within the mapped territory of the machine’s circuit/real estate. 6. Johanna Drucker, “The Ontology of the Digital Image,” Reimagining Textuality: Textual Studies in the Late Age of Print, eds. Elizabeth Bergmann Loizeaux and Neil Fraistat (Madison: University of Wisconsin Press, 2002). Quotation here is from a draft copy supplied to me by Drucker, manuscript page 18.
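Drucker’s point that digital data is always somewhere, addressed and sited, can even be observed from inside a running program. The following is a CPython-specific sketch (the fact that id() reports an object’s memory address is an implementation detail of that one interpreter, not a guarantee of the Python language):

```python
import ctypes

data = b"afternoon"  # a stored byte sequence
address = id(data)   # in CPython specifically, id() is the object's memory address

# The object is recoverable from that address: the data is sited, place specific.
recovered = ctypes.cast(address, ctypes.py_object).value
print(hex(address))       # e.g. 0x7f3c2a1b4d30, a location in the machine's memory
print(recovered == data)  # True
```

However evanescent the electrons, in other words, the bits have a street address in the machine’s “circuit/real estate.”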

Likewise, Greetham demonstrates the painstaking story-boarding and mouse-driven manipulations that must be brought to bear in order to bring off even a modest display of a supposedly spontaneous digital effect such as morphing. 7. See Greetham, “Is it Morphin’ Time?” Electronic Text: Investigations in Method and Theory, ed. Kathryn Sutherland (Oxford: Clarendon Press, 1997). Much the same could also be said of hypertext fiction: as anyone who has ever used Storyspace will know, its Barthesian “writerly texts” (celebrated by critics such as George Landow) are the product of non-trivial authorial effort to create links, guard fields, and so forth.

Hayles is right that it is easy to snipe at the positions of earlier critics, and she is certainly right that their initial contributions are profoundly enabling, in that they are necessary for the field’s ongoing work. I am simply suggesting that in the long run we do electronic fiction and our critical understanding of electronic textuality no favors by romanticizing the medium through a dated discourse of play that is really only screen deep. I contend that textual criticism and bibliography offer an alternative to post-structuralist discourse precisely because these disciplines provide us with the intellectual precedents and critical tools to account for first generation electronic objects as functions of the material and historical dimensions that obtain for all artifacts. Significantly, a bibliographical/textual approach calls upon us to emphasize precisely those aspects of electronic textuality that have thus far been neglected in the critical writing about the medium: platform, interface, data standards, file formats, operating systems, versions and distributions of code, patches, ports, and so forth. For that’s the stuff electronic texts are made of.

To a casual observer, such an agenda may sound dreary indeed, as dry as talk of lemmatization or collation formulas is to many of my literary colleagues. But we should keep in mind, as Jerome McGann writes, that bibliographical and textual studies are “the only disciplines that can elucidate the complex network of people, materials, and events that have produced and that continue to reproduce the literary works that history delivers into our hands.” 8. Jerome J. McGann, “The Monks and the Giants: Textual and Bibliographical Studies and the Interpretation of Literary Works,” Textual Criticism and Literary Interpretation, ed. Jerome J. McGann (Chicago: University of Chicago Press, 1985), 191. The relevance of that “complex network of people, materials, and events” that lies behind textual production is only amplified in electronic settings (this present exchange is a case in point). Indeed, we might call the belief that electronic objects are immaterial simply because we cannot reach out and touch them the “haptic fallacy.” So, my aim is not to sap the excitement of the new medium by confining its discussion to clinical arcana, but rather to demonstrate the extent to which ignoring the material basis of electronic objects only obscures their media and their make-up.

(Read Scott Rettberg’s and N. Katherine Hayles’s responses to Eskelinen and others.)
