Tempering the myth of global variety, David Golumbia examines the dominance of English in digital environments - and a highly standardized English at that.
Metadiversity: On the Unavailability of Alternatives to Information
Despite its apparent global variety, the Internet is more linguistically uniform than it is linguistically diverse. Almost all Internet traffic is conducted in one of the world’s 100 or so dominant languages, and the great majority takes place in the top 10 or so languages, with English being especially dominant due, among other reasons, to its use in the Internet’s coding infrastructure. Unwritten and nonstandardized languages, which make up the majority of the world’s approximately 6,700 languages, are hardly accounted for in the structure of Internet communication. On the worldwide distribution of languages see Grimes, Ethnologue. The emphasis in today’s Internet development on informatic models and on structured information reveals a bias characteristic not only of modern technological practices but also of modern standardized languages. This bias may limit the Internet’s effectiveness in being deployed for non-informatic uses of language, which have themselves been significantly underplayed in Western technological development and its theory.
Much cultural analysis of the Internet focuses on information - loosely, what is typically thought of as “content.” That is, the analytic object is what the user sees most prominently on the page, the words he or she types into a chat interface, the articles displayed and/or the aesthetic possibilities of website creation, and the means for transmitting, storing, and replicating them. See, for example, Landow, Hypertext and Hyper/Text/Theory, Lunenfeld, Digital Dialectic, and Bolter and Grusin, Remediation, all of which problematize the informatic focus while more or less endorsing it. Lessig, Code, and Poster, The Mode of Information are the best recent attempts to think critically about the informatic infrastructure. Turkle, Second Self remains a touchstone in thinking critically about the cultural-psychological consequences of the computing environment. Also see the references in Mann, “What Is Communication.” We refer to the advent of the Internet as an “Information Revolution” and to the computing infrastructure as “Information Technology” (IT). All of this suggests that information was somehow what was in need of technological change and that the inefficient transfer of information was an obvious social problem requiring a revolution. But for the human users of the Internet, information is realized, nearly exclusively, via printed language. So in addition to being part of the computer revolution, the Internet needs also to be seen in the wider frames of human languages and language technologies, where the question of the informatic nature of language is much more highly vexed than the IT revolution would make it appear.
Rather than IT, when we talk about what may be socially transformative about the Internet, we focus just as often on social connection and community:

So although the Internet is seen principally as a valuable reservoir of information, its main contribution may one day be seen as a catalyst for the formation of communities. Since communities bound by common interests existed long before computers, it is not as if we have now entered the next stage in the evolution of society (the `information age’). Rather, computer meshworks have created a bridge to a stable state of social life which existed before massification and continues to coexist alongside it. (DeLanda, A Thousand Years of Nonlinear History, 254)
Yet Manuel De Landa himself points out that it is standardized languages in general and most of all standardized written English as a medium for technical communication that open the possibility of the Internet itself. “English became the language of computers, both in the sense that formal computer languages that use standard words as mnemonic devices (such as Pascal or Fortran) use English as a source and in the sense that technical discussions about computers tend to be conducted in English (again, not surprisingly, since Britain and the United States played key roles in the development of the technology)” (253).
De Landa sees, rightly at least in a limited sense, that the Internet is becoming a place where it can be possible for “pride of the standard [to be] seen as a foreign emotion, where a continuum of neo-Englishes flourishes, protected from the hierarchical weight of `received pronunciations’ and official criteria of correctness” (253-4). But the boundaries of this continuum are narrow precisely because it is neo-Englishes rather than a diversity of world languages that flourish. It is no accident of history that the programming and markup languages that structure the Internet are almost exclusively written in standardized fragments of English, especially as English has been revisioned into the sub-languages of logic and mathematics. I discuss this at greater length in Golumbia, “Computational Object.” Also see Lyotard, Postmodern Condition. It is, rather, characteristic of these historical developments and of their constitutive relation to modern identity itself. It appears, at best, premature to suggest that systems constructed within such highly formalized, abstracted and, in an important sense, fictional structures could be responsible to the texture of human language - a texture whose variety we have scarcely begun to apprehend. Reddy, “The Conduit Metaphor,” remains the single best articulation of the distance between the formalized communicative object and actual linguistic practice; also see Lakoff and Johnson, Metaphors We Live By and Philosophy in the Flesh, and Mann, “What Is Communication.” (But this texture is at the same time familiar enough that we all understand the degree to which computers continue to fail to produce or understand anything very close to spontaneous human language.) For despite the appearance created in no small part by programming languages themselves, human languages need not be abstracted, one-to-one, univocally interpretable, or structured much like systems of propositional logic.
In fact, these characteristics are rare across the languages we do find in human history and contemporary (but not, in this case, necessarily modern) social life. See Golumbia, “History of `Language.’” Rather than a medium for transmitting and sharing human language, then, we must be prepared to see the proliferation of computer networks as part of an ongoing effort to shift the basis of language use toward one appropriate for an informatic economy. As discussed in Golumbia, “Hypercapital.” It is the constitutive power of this phenomenon to which we must learn to be especially attentive.
There is a curious lack of fit between the phenomenon called hypertext examined as an abstract or theoretical object, and hypertext as it is used on the Internet. As the term has been advanced in academic writing, hypertext refers to what might be thought of as a multidimensional intra-document linking system that helps us to “abandon conceptual systems founded upon ideas of center, margin, hierarchy, and linearity and replace them by ones of multilinearity, nodes, links, and networks” (Landow, Hypertext, 2). Taking as paradigmatic a particular kind of interactive narrative, including the works of Michael Joyce and the program Storyspace, these theories stress the ways in which “hypertext… provides an infinitely re-centerable system whose provisional point of focus depends upon the reader, who becomes a truly active reader in yet another sense” (Hypertext, 11).
To be sure, these distributive, informational networks do exist, but it is also fair to say that they are not the rule in terms of contemporary uses of hypertext. As the Web has matured, another and perhaps much more obvious usage of hypertext dominates, in which stability, centering, order, and logic are not necessarily resisted but may in fact be reinforced. Today’s web pages use hypertextual linking primarily to drive navigation in and among complete, stable, “sticky” application interfaces. This is what drives both standard and personalized portal pages. A personalized news page on a portal site such as Yahoo!, for example, consists of headlines in many areas of world and local news, divided into categories and subcategories that are intensely logical, that are in fact derived from a culturally-preconstructed taxonomy from which dissent is difficult to conceptualize, let alone practice. So the fact that some kinds of interesting and potentially transformative constructions are possible within a given medium should not distract us from understanding how the medium is actually being used, especially when these uses are very large-scale and very directly implicated in the production of contemporary subjectivities.
On our Web, HTML and hypertext are used to create rich, absorbing navigational experiences that instruct the user to stay where they are, with only occasional side glances to alternate information sources. Organizations focus workers’ daily experiences around wide-area websites, confirming exactly the identitarian structures that hypertext might be thought to resist. Every student, teacher, office worker, engineer, professor is compelled to have a relation to these stable, compelling, relentlessly logical interactive presences, in which documents are not so much intercut with each other as presented in orderly, menu-based groups.
In fact, it is odd that, instead of HTML, we speak of hypertext when we try to locate the salient analytic object in digital textuality. On reflection, HTML really does define what happens on the Web to an astonishingly large degree, and HTML is far more defined and linear than the word “hypertext” would suggest. HTML is typically used to structure the page, and the user’s experience of the page, so as to lead the user in a particular direction with particular goals in mind. That these goals are so often commercial and so often transaction-oriented seems to expose, to literalize, the most profound aspects of the Marxist critique of ideology in language. HTML surrounds “written” electronic language with a literal meta-language, whose goal is overt and unavoidable: to structure explicitly the page’s functions.
While the ability of HTML to create links between documents and parts of documents is critical to the Web, it is also merely one of a large set of programmatic features available to the web page writer, all of whose purpose is to help create structure. To some degree this is content-neutral; obviously no particular paragraph of writing is barred from being surrounded with <p> tags. But the entire set of HTML tags is deliberately built up from a system whose purpose is to structure information for cataloging and retrieval: to mark each and every piece of linguistic source data with some kind of markup tag that allows post-processing and redelivery. In this way language is constrained within the informatic paradigm on the Internet to a surprising degree.
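The point is visible in even the smallest page fragment: every run of prose must be enclosed in a tag that declares its structural role. In the sketch below, the element names are standard HTML, while the content and the id value are invented for illustration:

```html
<!-- Each element marks its contents for later processing: a
     division with a machine-readable id, a heading, a
     navigational list, and a paragraph. -->
<div id="news">
  <h2>Headlines</h2>
  <ul>
    <li><a href="/world">World</a></li>
    <li><a href="/local">Local</a></li>
  </ul>
  <p>Even the prose itself sits inside paragraph tags.</p>
</div>
```

There is no way to place language on the page outside this apparatus; to appear at all, a sentence must first be assigned a structural category.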
3. Structured Information
HTML (HyperText Markup Language) is typically thought of as a kind of design tool, and of course it is. But HTML is also a tool for structuring information: for applying general metadata to all the elements in a presentation set. “Structured information is information that is analyzed…. Only when information has been divided up by such an analysis, and the parts and relationships have been identified, can computers process it in meaningful ways” (DeRose, “Structured Information,” 1). HTML was in fact written originally by Tim Berners-Lee as a kind of simplified version of a language in which contents are explicitly tagged with meaningful metadata, called SGML for Standard Generalized Markup Language. See the World Wide Web Consortium’s (W3C) web pages on HTML, e.g., http://www.w3.org/MarkUp. SGML was developed for engineering and military documentation, in which it is assumed that every piece of information needs to be indexed for rapid retrieval and cross-matching. Robins and Webster, Times of the Technoculture, provides an excellent overview of some of the direct military interests involved in the information revolution; also see De Landa, War, and Poster, Mode of Information.
Today HTML is used to apply structure to the general linguistic environment of the Internet. The primary structuring use of even the specific function known as hyperlinking is not that of connecting disparate documents or alternate paths through multidimensional content. Rather, linking is used for menus and other navigational elements. The big tabs at the top of the Amazon.Com page that allow the user to choose among Books, Video, and Lawn Tools are the meat of hypertext. The categories themselves are not arbitrary, but instead are generated out of much more highly-structured data environments (databases). See Poster, Mode of Information, especially Chapter Three, “Foucault and Databases: Participatory Surveillance.” These tabs can even be thought of as a kind of exposure of the metadata environment of the website. In a commercial Web operation like Amazon.Com, this activity is inherently interactive with the user’s patterns of spending, such that the entire structure of the hypertextual experience is laid in place by explicit logical programming rules, which operate ideally out of the realm of conscious comprehension. You don’t know why the website seems to reflect categories that occasionally grab your interest or reviews of books that you have been wondering about.
The inherent structuring of HTML has been built on in recent technology by the advent of increasingly powerful dynamic web page generation language standards (such as Java Server Pages, Active Server Pages, and Cold Fusion pages - each of which can be identified by noting the presence of the extensions .jsp, .asp, and .cfm respectively in web page URLs). These technologies allow the incorporation of database content directly into what look like static HTML documents. They are very literally the language out of which the Web is largely delivered, for academic journals no less than e-commerce sites. Because these meta-rules are applied within the text of the apparent display language, they further blur the distinction that allows us to think of source code as metalinguistic and web page content as ordinary language - content.
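A schematic fragment in the Java Server Pages idiom suggests how this blurring works; the <%= %> and <% %> delimiters are standard JSP, while the object, method, and query names are invented for illustration:

```jsp
<%-- Programmatic meta-rules sit inside the apparent display
     language; the page is assembled per-user at request time. --%>
<html>
  <body>
    <h1>Welcome back, <%= user.getName() %></h1>
    <% for (Headline h : store.query("personalized-headlines")) { %>
      <p><a href="<%= h.getUrl() %>"><%= h.getTitle() %></a></p>
    <% } %>
  </body>
</html>
```

What the reader receives as a stable page of "content" is thus already the output of logical rules interleaved with the markup itself.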
Currently, the W3C has nearly finished the articulation of XHTML, a set of standards that allow all HTML content to be rewritten within XML-based contexts. XML stands for eXtensible Markup Language, and represents an explicit attempt to replicate the meta-linguistic tagging properties of SGML widely throughout the Internet (XML is actually a simplified form of SGML, although it has been extended beyond this original base). The standard pocketbook definition (literally) says that XML is a “meta-language that allows you to create and format your own document markups…. Thus, it is important to realize that there are no `correct’ tags for an XML document, except those you define yourself” (Eckstein, XML, 1).
That is, XML is a set of standards for expressing metadata in any form chosen by the programmer. Any viable set of categories should inherently be able to be realized in an XML implementation. In practice, of course, XML documents, especially their large-scale programmatic elements, are written exclusively in English (although the standard allows content to be written in any language, and some levels of tagging are certainly written today using European languages). More importantly, XML is rarely used by individuals or even community groups to create ad-hoc data structures; to the contrary, XML is most widely used by businesses to structure content for electronic commerce, and also for more directly technological applications. In these applications a standards committee drawn from members of prominent businesses and institutions within the appropriate domain is convened. The committee issues successive standards, which dictate exactly how content issued within the industry should be marked up. The neutral standards-based web page known as XML.org promotes itself as “The XML Industry Portal,” and offers pointers to standards for using XML within social domains as widely dispersed as Data Mining, Defense Aerospace and Distributed Management. See http://www.xml.org. The Oasis-Open project at http://www.oasis-open.org is currently the locus for the promotion of Structured Information on the Internet. In fact, not surprisingly, SGML itself has survived in no small part due to its applicability in military engineering projects, where parts, features and functions are categorized to an exorbitant level.
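Since the standard permits any tag set at all, a community could in principle encode its own categories directly. The sketch below is a purely hypothetical vocabulary - every element and attribute name is invented - of the kind the specification allows but that, in practice, is far rarer than industry-standardized commercial schemas:

```xml
<!-- No tag here is `correct' in advance; each is defined by the
     document's author rather than dictated by a standards
     committee. -->
<story told-by="elder" language="unwritten" register="ceremonial">
  <opening formula="traditional">...</opening>
  <episode place="river-crossing">
    <aside audience="children">...</aside>
  </episode>
  <closing formula="call-and-response">...</closing>
</story>
```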
In practice, then, the proliferation of XML and XML-like markup strategies suggests a remarkable degree of institutionally-controlled standardization. By incorporating display standards like XHTML into current web pages, developers can ensure the thorough categorization of every aspect of Web content. Rather than a page, the screen breaks down into more or less discrete units, served up in interaction with masses of data and statistical sampling that are by definition not available for the user to examine or understand. Instead, through such probability- and category-driven conceptions of “personality,” subjectivity itself is presented whole, pre-analyzed, organized, almost always around a central metaphorical goal, usually an economic one. For examples see Birbeck, Duckett, and Gudmundsson, Professional XML, and Fitzgerald, Building B2B Applications. The user is free to choose whether she is interested in Sports or Finance, Hockey or Baseball, the Detroit Red Wings or the Seattle Seahawks. But she is hardly free to reassemble the page according to different logics, different filtering approaches, applying critical logic or any sort of interpretive strategy to the AP Newswire or Dow Jones news feed. This informatic goal instances itself in every aspect of the web page presentation, carving cultural-cognitive streambeds in which the water of thought almost unavoidably runs. It is not clear that our society has effective mechanisms for evaluating the repackaging of our language environment in this way, in the sense of allowing a large group of technicians and non-technicians to consider deeply its motivations and consequences.
Metadiversity is a term that fails to mean what we need it to. The term has been introduced by information scientists and conservation biologists to indicate the need for metadata resources about biological diversity, no doubt a critical requirement. But the term metadiversity suggests something else - a diversity of meta-level approaches, or even more directly, a diversity of approaches, of schemes, of general structuring patterns. Seen from the perspective of linguistic history, the linguistic environment of the Internet seems to offer not a plethora of schemes but a paucity of them, clustered around business-oriented and even military-based informatic uses. The language technology developed for the Web is primarily meant to make it easy to complete a transaction, close a deal, accept a payment; it is less clearly meant to facilitate open and full speech, let alone to foster a true diversity of approaches to language.
The history of language is rich with examples of structural alternatives to our current environment. These examples include phenomena found in what are today known as “polysynthetic” and other primarily oral languages. Such languages display grammatical and lexical differences from English, from European languages, and even from some modern non-European languages like the dominant languages of Asia. The languages stand in ambiguous relation to the kind of form/content split that has ground its way thoroughly into Western language practice, so much so that no less a linguist than Roy Harris can suggest that the triumph of computers represents the triumph of a “mechanistic conception of language” (Language Machine, 161). This is not some isolated ideology that can be contained within the technical study of linguistics (whose participation in the system of disciplinary boundaries is already highly problematic), though its presence in linguistics is clear and unambiguous. It extends outward in every way to the culture at large, providing models of subjectivity for a great percentage of those who provide so-called intellectual capital for international business. The ideology precisely provides form for subjectivity, suggesting to many normative individuals that existence itself is subject to binary thinking and unitary pursuit of goals.
In the most curious way, this ideology reveals its power through a kind of strong misreading. Just as the term metadiversity is in effect encapsulated against its most direct lexical content, so the apparent homology between modern information networks and biological systems is misrendered, resulting in a highly teleological area of research known loosely as bioinformatics. Thinking broadly of the effects various telematic changes have had on the development of modern consciousness, Gayatri Spivak writes that “the great narrative of Development is not dead. The cultural politics of books like Global Village and Postmodern Condition and the well-meaning raps upon raps upon the global electronic future that we often hear is to provide the narrative of development(globalization)-democratization (U.S. Mission) an alibi” (Critique, 371). The marriage of the deep biological/machine metaphor and the development narrative produces a desire to make information live, to replace and translate the units of biological information (genes) with those of an artificial, formal linguistic system - one which somehow manages always to work in accordance with the needs of transnational capital.
We see the marks of this deep ideology everywhere in culture, where it almost unfailingly works to support the processes of globalist, nationalist development (even where it merely comes down to the more local politics of academic disciplines) and against the claims of marginal, deterritorialized, often de-languaged minority groups. See Grenoble and Whaley, Endangered Languages, and Skutnabb-Kangas, Linguistic Genocide. The deep metaphors at the heart of Chomsky’s writings have lately pushed closer to the surface, so that he now thinks of language in terms of “perfection” and “optimality.” “The language faculty might be unique among cognitive systems, or even in the organic world, in that it satisfies minimalist assumptions. Furthermore, the morphological parameters could be unique in character, and the computational system CHL biologically isolated” (Minimalist Program, 221). This bio-computer, unique in nature (but ubiquitous in modern thought and fiction), must be characterizable in terms of algebraic or otherwise formal rules, which take their form not from human language but from the logical abstractions on which computers are built. It is no surprise that Chomsky’s writing has lately started to use as core terms, in addition to abstract words such as Move and Derivation, terms which sound derived directly from programming languages. The Minimalist Program invokes Select (226ff.), Merge (226ff.), Spell-Out (229ff.), and perhaps most tellingly, Crash (230ff.), which happens “at LF [Logical Form], violating FI [Full Interpretation]” (230) - all terms with wide applicability and use in various domains of computer science and programming languages. (From this small historical distance, it now seems hard to construe as accident that just as the use and development of the computer really takes off at MIT, so does the theory that language should be understood primarily as the stuff that computers understand - symbols manipulated by a logical processor.)
This is made clearest in Huck and Goldsmith, Ideology and Linguistic Theory, and Harris, Linguistics Wars, though it requires some interpretation of either of these works to arrive at the point I am making here. Also see Harris, Language Machine, Lyotard, Postmodern Condition, and Turkle, Second Self. It is also no accident that much of this research was directly funded by the military for the express purpose of getting machines to understand speech, presumably for intelligence purposes. See Harris, Linguistics Wars, and De Landa, War, but also see the footnotes and endnotes of many of the early works of generative grammar in which military funding is explicitly mentioned. It is, for example, an odd note of linguistico-political history that Chomsky’s principal mid-sixties work, Aspects of the Theory of Syntax, “was made possible in part by support extended the Massachusetts Institute of Technology, Research Laboratory of Electronics, by the JOINT SERVICES ELECTRONICS PROGRAM (U.S. Army, U.S. Navy, and U.S. Air Force) under Contract No. DA36-039-AMC-03200(E)…” (Aspects, iv).
Within the field now called bioinformatics, misapplication of the bio-computer metaphoric cluster runs rampant, often mapped very precisely onto the direct-forward telos of capital. Most familiarly, the term refers to the collection of genetic data in computerized databases - where it already bleeds over into the ambition to read the human genome like a book, like a set of explicit and language-like instructions, again construing language explicitly as an information-transfer mechanism. Eugene Thacker discusses this aspect of the phenomenon briefly in his “Bioinformatics.” Perhaps the genes truly are like human language - in which case they would appear full of systemic possibilities, none of which are realized in similar or equipotent or equally meaningful ways. (Or maybe genes really are informatic, in which case the reverse cautions might also apply.) What would seem plainest on a dispassionate consideration of intellectual history is that there are probably all sorts of ways of processing genetic material that will not be at all obvious or literal. This leads implacably to the conclusion that, because we seem unable to consider what we are doing prior to operating, we are no doubt even now rewriting scripts whose meanings we scarcely know.
Would that this were the only place in which the bio-computer ideology drives us forward. But in fact other programs, also referred to as bioinformatic, grow not unfettered but with the explicit prodding of military and capitalist interests. These programs include efforts to create “living” programs, code that repairs itself, genetic algorithms, “artificial life,” and many others. See, for example, Brown, Bioinformatics, Holland, Adaptation in Artificial Systems, and Vose, Simple Genetic Algorithm. Of course many of these programs prove, over time, to be nearly as science-fictional as they sound, but the fact that they exist as serious human propositions at all seems to me quite startling, and quite characteristic of the lack of metadiversity in our linguistic environment. In every case the motivation and the justification proceed hand-in-hand from remarkable, in-built assumptions about the inherent good in exploring basic natural phenomena via simulation and mimicry. I am not suggesting that such research is wrong, although I do hope it is less transgressive than it seems to want to appear. But it seems to me that an alternate perspective, derived from a cultural politics of the biological and linguistic-cultural environments, suggests that these research programs are profoundly ideological extensions of the public mind, rather than dispassionate considerations of possible roles for sophisticated linguistic tools in the human environment.
From such a perspective, in fact, what is striking about our world is not the attainments of our one linguistic society but the multiple, variant approaches to social reality encoded in the many thousands of human languages and cultures over time. As emblematic as the Internet is, it can be no more representative of the language environment than are the many linguistic technologies that have been systematically pressed out of modern awareness - and the fact that it is so heavily promoted by institutions of authority should, despite all the Internet’s attractions, give us pause. Reflecting on the natural world, it seems hard to understand how human beings could come to any conclusion other than that part of our responsibility is to preserve the many natural processes that have proven themselves to be, so far, largely beyond our ken, so that we might understand them more deeply. Instead, capital insists on the vivisection - or just outright destruction - of these biological and environmental alternatives. Less well-known is the plight of linguistic variety itself: the pressure exerted by English, by standardization, and by the networked reliance on programming and markup languages on those existing remnants of the world’s lost languages. See Crystal, Language Death, Grenoble and Whaley, Endangered Languages, Maffi, Biocultural Diversity, and Skutnabb-Kangas, Linguistic Genocide. These languages must not be thought of as simple “formal variants,” alternate ways of approaching the same underlying material (which a computational perspective might seem to suggest). Instead, they are true examples of metadiversity - systems or quasi-systems that encode not just methods of approaching social relations but also approaches to the history of the self, the constitution of identity and otherness.
Thus recent evolutionary theory has begun to point, for example, to social structuring processes as linguistically generative, perhaps more so than the putative features of Universal Grammar - see, e.g., Dunbar, Gossip, and Goody, Social Intelligence and Interaction.
With respect to our linguistic environment, even a dispassionate and so-called scientific perspective, no less a cultural materialist one, suggests that what is most vital to us is our multiplicity of structural alternatives, the heterogeneity of social interpretations whose variance itself is part of what allows society to be flexible, accommodative, meaningful. This is exactly what is suggested in Abram, Spell of the Sensuous, and Maffi, Biocultural Diversity - quite literally that linguistic diversity constitutes a critical feature of the natural environment and even that the environment requires linguistic diversity to sustain biodiversity. We see again and again the record of apparently significant cultural histories characterized as myth, while one central set of metaphors derived from the success of the physical sciences continues to dominate investigation of not just the body but of human culture itself, which is to say language. See Lakoff and Johnson, Metaphors We Live By, and Philosophy in the Flesh.
Perhaps the promise of the Internet lies in the marks within it, even today, of mechanisms leading toward the creation and revitalization of alternate and variable kinds of languages and language-like formations, to some degree beyond and outside of information and communication. Of course a critical part of such formations is the raw assembling of communicative groups, such as newsgroups, chat rooms, website-based communities, and other devices wherein electronic communication is fundamentally multithreaded. Previous innovations in communication have generally been structured either as broadcast (one-to-many) communications - such as print publishing, television and radio broadcasting, where a generally powerful single entity is able essentially to create many copies of its own communications and then to distribute these widely among a population literate in the given medium - or as one-to-one interactions (the chief examples are letter writing, the telegraph and telephony). The Internet does encourage various and to some extent innovative kinds of both one-to-one and broadcast communications. Even more than these, however, the promise of the Internet seems to reside in its ability to facilitate something like many-to-many communicative formations. This is to approximate something not unlike the myriad forms of small group and peer communication that are characteristic of social groups.
In both the one-to-one and many-to-many registers we find true arenas for linguistic innovation. One reason there has been such proliferation of language in our world (prior to the work of standardized languages like English) is that both intimate and social communication, when unconstrained by the institutional pressures that are especially characteristic of broadcast communicative praxes, provide especially fertile ground for experimentation and performative adoption of linguistic and cultural strategies. This seems to me in line, to at least some degree, with the approach toward identity and cultural politics found, for example, in Butler, “Performative Acts” and Gender Trouble, and Spivak, “Acting Bits/Identity Talk” and Critique of Postcolonial Reason. Outside modern institutionalized standards, language is often perceived less as a set of static elements and rules to be applied according to pre-existing constraints, and more as a cognitive medium for live innovation, deconstruction, creation, interaction. See Golumbia, “History of `Language’,” and Harris, Language Machine. One reason for the proliferation of languages the world over may be that linguistic diversity correlates somewhat directly with a kind of local adaptiveness - providing both for certain kinds of local cultural homogeneity and for a great deal of areal cultural diversity. See Abram, Spell of the Sensuous, and Maffi, Biocultural Diversity. On local cultural homogeneity, see Sapir, Language. On areal diffusion and its influence on linguistic history see Dixon, Rise and Fall of Languages. Derrida, Monolingualism of the Other, offers some provocative reflections on the consequences of monolinguality.
There exists a relatively clear historical line from the monolingual policies and technologies that have been advocated especially by the West to the current relative monolinguality of the Web. On the earlier parts of this history see, for example, Ong, Interfaces, and Orality and Literacy, and, in another register, Anderson, Imagined Communities. On the consequences of the abrupt imposition of such technologies on human societies more generally, see Mander, Absence of the Sacred. At the same time many of the phenomena decried by critics of the Web - the bad spelling caused by typing emails quickly, poor editing of “fan”-created Web pages, apparently vague “emoticons” - demonstrate the power of noncanonical language to rise above the constraints on which standardization insists, usually for the purposes of social interaction, often far above or beyond meaning per se. In addition to the social approach suggested in Dunbar, Gossip, also see the work of more recent language ideology theorists such as Kroskrity, Regimes of Language, and Schieffelin, Woolard, and Kroskrity, Language Ideologies. So does the Web’s ability to draw into interaction communities from many different language groups, including groups whose languages have not been part of the standardization process but who nevertheless wish to use the network to speak in other registers. See Crystal, Language and the Internet. To some extent, then, what seems on the surface least political about the Web may be what is most important: providing raw bandwidth to those whose voices and languages have been pushed away by standardization. (However, the relative difficulty of sustaining broadcast media technologies, such as low-power radio and television stations, in nonstandard languages lends some caution to this view.)
This is not exactly to argue that we should resist technological innovation altogether (though see Mander, Absence of the Sacred, and Abram, Spell of the Sensuous, for surprisingly compelling statements in this direction). It is to say that, in the realm of linguistic technology, it may well be the case that the stuff of spoken language itself provides a kind of bare technological matter that can help us to restructure social life in significant ways. A more effective Internet may need to be not merely written, but verbal and visual; it may need to accommodate better the full range of human sight, sound, and gesture, to allow us to push beyond the linguistic constraints print and standardization have unwittingly placed on us. It may also be interesting to see whether it is possible to encourage the development of new, non-Roman-script linguistic representations (such as emoticons) which lack strongly standardized underpinnings. If, in fact, some kind of change in language technology is needed to create a more flexible and diverse society (as the IT revolution seems to suggest on its face), we might look just as fruitfully to the innovations produced over tens of generations by thoughtful speakers of human languages as we do to the more short-term innovations produced in the name of the general reduction of social language to informatic technologies.
Abram, David. The Spell of the Sensuous: Perception and Language in a More-than-Human World. New York: Pantheon Books, 1996.
Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. Revised and Expanded Edition, London: Verso, 1991.
Birbeck, Mark, Jon Duckett, Oli Gauti Gudmundsson, et al. Professional XML. Chicago: Wrox Press, 2001.
Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 1999.
Brown, Stuart M. Bioinformatics: A Biologist’s Guide to Biocomputing and the Internet. Natick, MA: Eaton, 2000.
Butler, Judith. “Performative Acts and Gender Constitution: An Essay in Phenomenology and Feminist Theory.” Theatre Journal 40:4 (December 1988). 519-531.
—. Gender Trouble: Feminism and the Subversion of Identity. New York and London: Routledge, 1990.
Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge, MA and London: The MIT Press, 1965.
—. The Minimalist Program. Cambridge, MA and London: The MIT Press, 1995.
Crystal, David. Language and the Internet. New York: Cambridge University Press, 2001.
—. Language Death. New York: Cambridge University Press, 2000.
De Landa, Manuel. A Thousand Years of Nonlinear History. New York: Swerve Editions/Zone Books, 1997.
—. War in the Age of Intelligent Machines. New York: Swerve Editions/Zone Books/MIT Press, 1991.
Derrida, Jacques. Monolingualism of the Other; or, The Prosthesis of Origin. Trans. Patrick Mensah. Stanford, CA: Stanford University Press, 1998.
DeRose, Steven J. “Structured Information: Navigation, Access, and Control.” Paper presented at the Berkeley Finding Aid Conference, Berkeley, CA, April 4-6, 1995. http://sunsite.berkeley.edu/FindingAids/EAD/derose.html.
Dixon, R. M. W. The Rise and Fall of Languages. Cambridge and New York: Cambridge University Press, 1997.
Dunbar, Robin I.M. Grooming, Gossip, and the Evolution of Language. Cambridge, MA: Harvard University Press, 1996.
Eckstein, Robert. XML Pocket Reference. Sebastopol, CA: O’Reilly, 1999.
Fitzgerald, Michael. Building B2B Applications with XML: A Resource Guide. New York: John Wiley & Sons, 2001.
Golumbia, David. “The Computational Object: A Poststructuralist Approach.” Computers and the Humanities (under review).
—. “Hypercapital.” Postmodern Culture 7:1 (September 1996). http://www.mindspring.com/~dgolumbi/docs/hycap/hypercapital.html.
—. “Toward a History of `Language’: Ong and Derrida.” Oxford Literary Review 21 (1999). 73-90.
Goody, Esther N., ed. Social Intelligence and Interaction: Expressions and Implications of the Social Bias in Human Intelligence. Cambridge: Cambridge University Press, 1995.
Grenoble, Lenore A., and Lindsay J. Whaley, eds. Endangered Languages: Current Issues and Future Prospects. Cambridge and New York: Cambridge University Press, 1998.
Grimes, Barbara F., ed. Ethnologue. 14th Edition. CD-ROM. Dallas, TX: SIL International, 2000.
Harris, Randy Allen. The Linguistics Wars. New York and Oxford: Oxford University Press, 1993.
Harris, Roy. The Language Machine. Ithaca, NY: Cornell University Press, 1987.
Holland, John H. Adaptation in Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge, MA: The MIT Press, 1992.
Huck, Geoffrey J., and John A. Goldsmith. Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates. London and New York: Routledge, 1995.
Kroskrity, Paul V., ed. Regimes of Language: Ideologies, Polities, and Identities. Santa Fe, NM: School of American Research Press, 2000.
Lakoff, George. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago and London: University of Chicago Press, 1987.
—, and Mark Johnson. Metaphors We Live By. Chicago and London: University of Chicago Press, 1980.
—, and —. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books, 1999.
Landow, George P. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore, MD: Johns Hopkins University Press, 1992.
—, ed. Hyper/Text/Theory. Baltimore and London: Johns Hopkins University Press, 1994.
Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999.
Lunenfeld, Peter, ed. The Digital Dialectic: New Essays on New Media. Cambridge, MA: The MIT Press, 1999.
Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge. Geoff Bennington and Brian Massumi, trans. Minneapolis: University of Minnesota Press, 1984.
Maffi, Luisa, ed. On Biocultural Diversity: Linking Language, Knowledge and the Environment. Washington, DC: Smithsonian Institution Press, 2001.
Mander, Jerry. In the Absence of the Sacred: The Failure of Technology and the Survival of the Indian Nations. San Francisco, CA: Sierra Club Books, 1992.
Mann, William. “What Is Communication? A Summary.” Posting to FUNKNET list (February 17, 2001). Archived at http://listserv.linguistlist.org/cgi-bin/wa?A2=ind0102&L=funknet&P=R391.
National Federation of Abstracting and Information Services (NFAIS). Proceedings of the Symposium on Metadiversity, 1998. Philadelphia, PA: NFAIS, 1998.
Ong, Walter J. Interfaces of the Word: Studies in the Evolution of Consciousness and Culture. Ithaca, NY: Cornell University Press, 1977.
—. Orality and Literacy: The Technologizing of the Word. London and New York: Routledge, 1988.
Poster, Mark. The Mode of Information: Poststructuralism and Social Context. Chicago: University of Chicago Press, 1990.
Reddy, Michael J. “The Conduit Metaphor: A Case of Frame Conflict in Our Language about Language.” In Andrew Ortony, ed., Metaphor and Thought. Cambridge: Cambridge University Press, 1979. 284-324.
Robins, Kevin, and Frank Webster. Times of the Technoculture: From the Information Society to the Virtual Life. London and New York: Routledge, 1999.
Sapir, Edward. Language: An Introduction to the Study of Speech. London: Granada, 1921 (Reprinted, 1978).
Schieffelin, Bambi B., Kathryn A. Woolard, and Paul V. Kroskrity, eds. Language Ideologies: Practice and Theory. Oxford: Oxford University Press, 1998.
Skutnabb-Kangas, Tove. Linguistic Genocide in Education – or Worldwide Diversity and Human Rights? Mahwah, NJ and London: Lawrence Erlbaum Associates, 2000.
Spivak, Gayatri Chakravorty. “Acting Bits/Identity Talk.” Critical Inquiry 18:4 (Summer 1992). 770-803.
—. A Critique of Postcolonial Reason: Toward a History of the Vanishing Present. Cambridge, MA: Harvard University Press, 1999.
Thacker, Eugene. “Bioinformatics: Materiality and Data between Information Theory and Genetic Research.” CTheory Article 63 (October 28, 1998).
Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984.
Vose, Michael D. The Simple Genetic Algorithm: Foundations and Theory. Cambridge, MA: The MIT Press, 1999.