Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google
Now: How will our encounters with these intelligent personal
assistants - robots we’ve invited into our homes to speak with and
listen to us, who share this data with vectoralist institutions
that monitor our networked transactions - alter both human language
and our efforts to lead meaningful lives? In a wide-ranging,
philosophical essay that exposes various myths of computation while
presenting a candid assessment of the rapidly evolving culture of
reading, poet John Cayley speculates that literature will be
displaced by aurature. Listen up, readers: A major challenge in the
programming era will be to develop linguistic aesthetic practices
that intervene significantly and affectively in socio-ideological
spaces that are thoroughly saturated with synthetic language and
largely controlled by commercial interests. The time for aesthetic
experiments that disrupt the protocols of a still-nascent aurature is now.
This essay was reprinted in part in the Handbook on Electronic Literature (Bloomsbury 2018).
The actual ends of ‘electronic literature’ are implied by a name that embraces its supposed means. 'Electronic' refers to means in a way that is well understood but promotes quite specific means as the essential attribute of a cultural phenomenon, a phenomenon that was once new, a new kind of literature, a new teleology for literary practice, an 'end' of literature having its own ends, the end of electronic literature in its means, misdirected ends justified by misappropriated means. This brief essay will not remain bound up within the conceptual entanglements of a name.1 We will move on from 'end(s)' to means, to media, and finally — as we shall see — to medium.2 We understand that 'electronic' in 'electronic literature' — now indisputably one end of a field of serious play for the theory and practice of literature — refers metonymically to computation and all its infrastructure: hardware, software, interface & interaction design, networking, and today also, since at least the mid 2000s, to a particular de facto historically-created world built from all of this infrastructure within which most of us now 'live' for a considerable portion of our lives, our cultural and, predominantly, our commercially implicated, transactional lives.
The existence of a particular world, or, to use a less charitable if more accurately constrained term, a regime of computation is worth recalling as we establish some context for what follows.3 One of the myths of computation is that its artifacts are indeterminate, open, 'free' — configurable at the level of the individual person or, indeed, process and operation. The origin of this mythic power is the understanding that, in principle, a computer is what I have preferred, for decades now, to call a programmaton, a programmable device, the inchoate delivery and compositional hardware of programmable media (Cayley 1998). The programming in question may, in principle, be done by anyone who owns such a device and learns its code. In practice, this has had the consequence that electronic literature has been characterized by extraordinary variety and novelty at all levels of compositional form. When an author sets out to make an electronic literary artifact using a computational system, a major part of the pleasure and excitement is that of discovering some number of the virtually limitless ways in which computation and its affordances may change, inflect, modify, and even undermine or destroy existing literary forms and practices. The author is also likely to assume that this pleasure and excitement will be shared, for its own sake and regardless of traditional expectations, by the projected readers of the work. Moreover, there are critics and scholars near at hand who are willing to give unreserved positive evaluations of such work, regardless of how or even whether its language is read, so long as it gives actual, embodied — if media specific — form to the genii of the myth; so long as it is work that — formally at least — instantiates indeterminacy, openness, freedom, any and all of the new ends of literature.4
The problems engendered by these circumstances are substantial enough, some might argue, to presage an end to electronic literature. Here, we must try to state the problems clearly before offering a happier ending. There is the challenge to reading. This is at least twofold. Formal bewilderment discourages reading and readers. Reading is a learned practice; it is not innate to the human animal (Dehaene 2009, Hurford 2014). Asking readers to learn new forms is asking them to extend their learning rather than immediately offering them aesthetic experience. Of course, some formally innovative artifacts will be of a quality or importance that necessitates and rewards extra learning and effort. Literary culture moves on. But how will readers pick and choose amongst forms when every artifact is formally distinct if not entirely outside of any pre-existing formal categories? And how are they to discover any quality or importance for the language of the work if formal bewilderment makes it difficult or impossible for them to read? Secondly, there is the confusion that arises when electronic literary artifacts are esteemed by scholars and critics regardless of these artifacts’ readability in terms of language. The way that computation facilitates — often via one and the same compositional platform — the introduction of other media into the work (here, I mean the plural of medium, typically: visual, graphic, cinematic, sonic, musical) provides further confusion. Computation appears to offer us such articulation of recombination and structuring with respect to other media that we begin to think of their composition as events of ‘language’ in more than a merely metaphorical sense.5 They become literary in themselves, new ends of literature.
It becomes enough, we say, to ‘read’ this new ‘language’ and less important to have read any language that happens also to have been written into the work, which language may, meanwhile, have collapsed into the visual or sonic ‘images’ of other, non-linguistic media, of light and sound.
We will return to the practice of reading because, to state it clearly, reading is constitutive of language. We may argue about how and what we read, but it is nonetheless axiomatic: no reading, no language. If there is no reading then we have reached the end of all literature, and electronic literature may have helped to get us there.
Complementary problems arise when we turn to face the world that has developed, historically, culturally, to receive and sometimes publish the artifacts of electronic literature. The myth of computational media’s indeterminacy, openness, freedom has become just that, still affectively powerful, but merely a story from the hyper-distant past. The actual world of computation within which we now dwell has an architecture that is as substantial and determinative as that of bricks and mortar. It may be, technically, ‘softer,’ but capital and power and influence — I call their totalizing amalgamation Big Software — are required in equal measure — relative to the age of print — in order to effect change on a scale commensurate with, for example, urban planning, large corporate operations or, crucially, the creation and maintenance of institutions of any moment.
The crucial sociopolitical and economic contexts for this terrifying change have been signaled and set out by, amongst others, McKenzie Wark, David Golumbia, and Bernard Stiegler (Wark 2004, Golumbia 2009, Stiegler 2010). A literature of engaged, scholarly critique surrounding these issues is maturing steadily and will be important to scholars and practitioners who are preparing (for) the end(s) of electronic literature. As Wark suggests — to outline, very briefly, my own current reading of this literature — there is now an emergent if not established class of vectoralists. These are chiefly corporate powers and are relatively unregulated because they are network-enabled, transnational and, by definition, representative of a paradigm shift in the structures of polity. Vectoralism operates within what Golumbia calls the ‘regime of computation’ and in accordance with certain of its questionable assumptions. In the post Second World War period a more or less reductive and scientistic approach to brain, mind, behavior and hence to culture at large, grew out of research on cybernetics and information, an approach that tends to foreclose, in Golumbia’s view, more progressive engagements with social and political issues, and now does so in terms of what some more popular commentators describe as ‘solutionism’ (for example: Morozov 2013). Vectoralists — glossed as society’s computationally-enabled, network service providers of solutions to everything — are a new ruling class which exploits the productive labor of a co-emergent class of ‘hackers.’ For Wark, networked computation sites human life as also within a ‘third nature’ that is constituted by flows of information. These may be ‘hacked’ to generate vectors of cultural and commercial interest. Vectoralist power has succeeded swiftly and spectacularly in aggregating and controlling these vectors. They own and exploit them as a new means of production in the ever-growing attention-led economy of third nature. 
Stiegler, meanwhile, sets out one of the most telling critiques of these circumstances within a philosophical framework that encompasses a history of technics/technology, particularly mnemotechnics, as inherently co-constitutive of human life and culture. He discovers us — human persons — threatening to lose the savoir vivre that allows us to take care of the vital cultural institutions which are challenged and threatened by vectoralism.6
Most commercial publishers are and will remain large corporations, although highly adaptable to the new vectoralist polity. If certain network-enabled publishing ‘platforms’ are smaller — the size of individual persons even — and more independent, these less conventional publishers will nonetheless still need to compete and transact with massive, vectoralist institutions. It is now also the case in developed, networked societies, that sociopolitical frameworks for culture and practices of cultural engagement are realized within the constraints of Big Software’s architecture. This is true for literary practice, even when the architecture has not been constructed by or on behalf of any practices associated with publishing as previously understood.7 Escaping the gravity of print culture does not necessarily imply that electronic literature is no longer in the orbit of material cultural architectures — ancient, modern and contemporary. What this does mean is that the contemporary and evolving cultural practices of reading — what reading is and will become — will be determined not by the innovators of electronic literature; they will be determined by those cultural power brokers who build and control the Big Software architecture of reading.
Electronic literature emerged, ostensibly, as a radical, not to say revolutionary engagement with the practices of linguistic art. It presumed that the application of computationally-realized affordances to reading and writing would demand, more or less universally, entirely new forms, institutions and aesthetics. In the early days of electronic literature’s critical self-consciousness, it was actually-existing hypertext that made these demands, and the fate (or destiny) of hypertext shows very clearly how new forms and institutions are hacked into the material cultural architectures of vectoralist regimes. If hypertext was not necessarily literary — as such, or with regard to literary art — its early history was intimately involved with literature as a name for documentary and archival practice (which of course includes literary practice). Arguably the first true architecture of network culture, the World Wide Web, established the preferred vectors in alignment with which hypertext became capable of operating in our everyday world. That is: utopian, ideal, or rational versions of hypertext were hacked into systems that simply worked, although they were far from realizing the radical, multivalent, two-way-linked hypertextuality of, for example, Theodore Nelson’s Xanadu, which included a scheme of micropayment for citation that might have entirely recalibrated literary commerce at the same time that it revolutionized knowledge infrastructure. 
Instead, hypertext quickly became ubiquitous but in forms that allowed it to be realized and practiced, along network vectors, chiefly so as to support commerce in its own terms, and more or less conventional forms of writing and reading: promotional ‘pages’ for corporations and individuals (prospectuses, CVs, portfolios, résumés); blogs (journaling, op-editorializing, opinion-forming, letter-writing, commentary); journalism (as such); and other representations of essentially 'printable' literature (in the archival and documentary sense). All other possible and potential forms of hypertext remain just that: possible and potential, or else, perhaps, realized as avant-garde and peripheral experiments outside the internet’s main vectors of attention.
Occasionally, certain network vectors for less conventional, generative forms of literature have achieved a degree of momentum. In the case of expressive hypertext — with choose-your-own-adventure gaming capabilities — we can now point to Twine as a platform still gaining significant popularity. But will it ever end up supporting Twine-writers and designers commercially, or as prominent literary practitioners? Some of the most important authors of hypertext fiction from the 1990s — Michael Joyce, Shelley Jackson — have abandoned the form in so far as it is programmable and networked. Serious practitioners who persist with electronic literature must still seek the consolation of formal avant-gardism. There is always the chance that an author-innovator from these margins — from amongst the independent or institutionally patronized experimenters — will produce work in a new form and of a quality that not only demands to be read but ensures that its particular form of reading becomes so widely understood and adopted that Big Software is encouraged to embrace and support it. But up until the present time, this has not happened in any of the ways that were envisioned.8
What has happened? We have eBooks. More to the point, people everywhere are reading eBooks. These inherently skeuomorphic cultural formations are nothing like what the community of electronic literary researchers and makers would want them to be. They represent, nonetheless, a perfectly adequate and quite distinct platform for reading. In my own case, it is now just as likely that I will have a profound literary aesthetic experience when reading an eBook as when reading a physical book. Along with those of many other friends and colleagues, I can report, albeit anecdotally, that my 'library' of eBooks is steadily growing and that I spend as much time reading from a tablet as from a printed book. Is this reading electronic? As a form or practice of reading, it is not experienced as fundamentally distinct when compared with reading from print. And yet, it is formally different, both in itself, subtly, and also with regard to real, novel affordances offered by both textual digitization and connection to the network (see below). There is also, quite simply, an increase in the total time taken for enjoyment or self-cultivation that is spent using computational devices — practicing, if not electronic literature, then unarguably digital material culture — such that the temporal economy of one’s reading has significantly increased its 'electronic' character. Readers and writers — to include digital language artist-practitioners like myself — still associate profound linguistic aesthetic experience with reading and writing as more or less conventionally understood, deriving significance and affect from encounters with streams of words, phrases, sentences, books. If reading and writing is to be pursued, and digital culture is also to be pursued, I believe there is a real tendency to feel that one’s desire to pursue the digital is satisfied when digital formats are deployed simply so as to allow for those persistently desirable pursuits of reading and writing. 
My passive linguistic aesthetic ambitions and desires were quite satisfied when I read Ben Lerner's 10:04 as an eBook and, I suggest, a certain measure of my desire to operate, culturally, with and within the digital was also satisfied. In order to more properly satisfy any desire for some digital-media-specific enhancements of my linguistic aesthetic experience I will tend, therefore, to seek out and focus on works that are significant and affective in terms of innovative form, inherently at odds with reading and writing conventions, inherently avant-garde — or at least new and disruptive — inherently less well integrated with the architectures of attention that Big Software is building for the literature that has been gifted to us, historically.
What do eBooks have that electronic literary makers and artists might find formally interesting? Annotation, bookmarking, limited multimedia content, linking, 'social' annotation and 'social' reading, built-in reference tools including access to the web and translation, and perhaps other affordances I have forgotten or will mention later. Listed in this way, this seems like a not inconsiderable list of facilities, all of which, presumably, could be composed and détourned by author-makers who so wished. 'Social reading and annotation,' for example, is a deeply radical inflection of one of our most important cultural practices. eBook platforms, such as Kindle, allow readers of the same e-edition to share and view annotations. By default, this facility is turned "on" for new Kindle readers. The reader's relationships with the text, with the publisher, with the distributor, with other readers, and so on, are fundamentally altered by network affordances of this kind. The sociology of reading — and thus reading itself — is changed overnight by a technology that does not otherwise pretend to fundamentally alter the practice of reading, in this case, books. But reading, more broadly conceived, has been changed for far more readers, by such social reading, than it was ever changed by, for example, long-form hypertext fiction. And 'shared annotation' is just one of many possibilities for new forms that are entirely within the gift of, in this case, a Big Software architect-distributor-retailer: Amazon. The point is that effective formal innovation is hacked from vectors of commercially implicated flows of network attention. It is not created or even suggested by linguistic-aesthetically motivated authors of reading and writing experiences.
Perhaps this essay seems about to call for something? New ends (and means) for electronic literature, perhaps? It is indeed about to do so, clearly, but not quite so clearly, yet. It follows from what I have already said that I believe researchers and practitioners of ‘electronic literature’ should pay more attention to the forms of vectoralist-controlled delivery media for 'literature' that are, historically, taking the place of physical, codex-bound books. Publishers will cease to print as soon as it is feasible for them to do so. All practitioners of linguistic and literary aesthetics must make themselves sensitive to the media that will constrain the composition of their work and then deliver it to readers. Within the avant-garde and amongst the independent and institutionally patronized literary artistic innovators, greater attention should also be paid to the actually existing and actually evolving culture of reading. There are examples of successful and interesting hybrid engagements. These are often works that intend, more or less explicitly, to reach a larger audience, sometimes reliant on the existing reputation of an author who is also print-published — certain works by Stephanie Strickland, Kate Pullinger, or Brian Kim Stefans — or referencing film narrative and narrative gaming such as Andy Campbell and the Dreaming Methods project. Samantha Gorman and Danny Cannizzaro’s Pry is a particularly fine and important hybrid (Gorman and Cannizzaro 2014). This work is a gorgeous audiovisual, multi-part book, the story of a young demolition consultant, James, whose life has been shattered and reconfigured by the 1991 Gulf War. As the story unfolds for us, James is losing his sight. The electronic literary mechanisms of the work literalize interface gestures and integrate them with the fiction. Readers must 'pry' open James' failing eyes and other conduits for visual experience and textual memory, in order for the story to progress.
Delivered by tablet, Pry can be understood by contemporary readers as like an eBook or, indeed, like a digital video or like a game, and so it will be — at one and the same time — read and watched and played. The way that it folds gesture into the act of reading proposes a new form of aesthetic literary experience that is as profound and as well executed as anything in the electronic literary field. But will "pry," as gesture, for example, ever be adopted, as a persistent, widely-understood form of reading, by the (eBook-)reading public at large? This is the type of question that the researchers and artists of electronic literature must always already ask themselves.
Is there something about the contemporary culture of reading, which has not so far been mentioned, and that has emerged with new significance? I believe that there is. At last we come to aurature at the end(s) of this essay. I deliberately left off one of the interesting affordances of contemporary eBooks from my previous list. Many eBooks now have companion audio versions, some of them with the ability to sync across reading platforms.9 Without being able, here and now, to quote hard literary sociological evidence to support this (although I am confident that my impression would be borne out), I would say, anecdotally, that there has been a significant increase in the reading of audio books over the past decade.10 They are ever cheaper to buy, much more numerous and, because of digitization and network delivery, an order of magnitude easier to acquire and manage. In the world of both popular and high literary culture, there has, therefore, been a significant increase in the appreciation of literary artifacts — in their being read I would say — by way of aurality, as opposed to by way of graphic visuality.
At this particular moment in the history of reading, when speaking with people who affirm that they are now 'listening' to increasing numbers of 'audible' books, one often finds that they may not consider themselves to have 'read' the book when they have 'only' listened to it. This is an indicative folk-phenomenological apprehension that can be compared with the often-expressed and likely related sense that one has not read a work of literature when one has only seen a movie (or play) that has been derived from it. But the cases are entirely different. Film is a distinct medium and the text of the work in a film version may be — must be — edited and rearranged. In the case of unabridged audible books, one experiences the entirety and integrity of the text as language, identical, in terms of linguistic idealities, with the printed version. Any prejudice against this being a 'true' or 'proper' reading of the text would demand to be supported by a linguistic philosophical analysis — related to questions of linguistic materiality and ontology — and is likely to prove a function of media-specific, culturally and historically implicated biases.
Coincident with the oxymoronic rise of 'audible literature,' there is the advent and persistent presence of Siri, Cortana, and Google Now. We are coming to realize that these computational entities linked to the 'cloud' — and thence to the research and service infrastructures of Big Software — are now listening to us, and responding with much improved synthesized voices, beginning to approach an acceptable coherence of significance and affect in construable utterance. These voices can also be configured to read out loud from arbitrary texts of our choice on computers and mobile devices housing the aforementioned software agents that the voices ventriloquize. People who nowadays encounter these vocal transactors may begin to understand some part of what has newly become of all the data that they provided and posted, that they have willingly and much too freely given over, not only to market profiling but to the solutionist research institutions of Big Software. Whereas computer voices and ‘text’ generation had remained, until quite recently, feeble, if charming, geekish jokes from the "AI winter," now many of us — I mean many non-specialists — have heard of what 'n-grams' may do for us and for our culture at large and that this is also an aspect of a widespread, ramified, and very pragmatic, commercially-invested engagement with 'natural language processing.'11
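For readers who have only 'heard of' n-grams: the technique is nothing more exotic than a table of conditional word frequencies, from which a machine can guess or generate a plausible next word. A minimal sketch (the toy corpus and function names are my own illustration, not any production natural language processing system):

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count, for each word, how often each word follows it in the corpus."""
    model = defaultdict(Counter)
    for w1, w2 in zip(tokens, tokens[1:]):
        model[w1][w2] += 1
    return model

def most_likely_next(model, word):
    """Return the most frequent continuation of `word` seen in training."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# A toy corpus; commercial systems train on billions of words.
corpus = "the end of literature is the end of reading the end of language".split()
model = train_bigrams(corpus)
print(most_likely_next(model, "end"))  # → "of" ("of" follows "end" every time)
```

Scaled up from word pairs to longer n-grams over vast corpora, this is the humble statistical machinery behind much of the synthetic language discussed here.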
With the prospect, in part, of being able to balance out what can only be understood as an invidious commercial overdetermination, a whole new field of technically and algorithmically implicated aesthetic language practice is opening up for just the kind of author-makers who may have been speculating about the ends of electronic literature. Perhaps we will not be able to think of this new field as, strictly, literary practice since its medium is language without the letter. As an applied grammatologist, however — someone who, following on from the insights in Jacques Derrida's Of Grammatology, has reinvested their practices in a poetic making that is attentive to language as such, regardless of material support or originary presence — I would propose that we eschew any unwarranted qualitative linguistic-philosophical distinction between writing and speech, reading and hearing. Language is medium-agnostic, although the human animal, as language co-creator, is not — with regard, that is, to genetic propensities. Regardless, to 'read' is, precisely, to transmute perceptible forms — consisting of any material substance — into language. While — in a human-genotypically induced circularity — it is the bringing into being of language that proves to us that ‘reading’ has taken place.
Aurature is the established practice of civilizing language that will emerge from our evolving material cultural circumstances.12 How and why might the practice of a computationally-implicated aurature be important, apart, that is, from helping to stave off or delay the end of electronic literature? To conclude this essay, I will simply illustrate a few points by way of example, not attempting to draw out the full implications of what is touched on in the following narrative. Nonetheless, the arrival of speaking and, especially, listening networked programmable devices — as a part of the technological and cultural architecture of Big Software — has, I believe, important consequences for literature and for literary — linguistic aesthetic — practices of all kinds.
After Siri and at around the same time that we were introduced to Cortana and Google Now, it became possible to invite Alexa — Amazon's Echo — into our homes, accompanied by much-satirized advertising suggesting that she might even become a kind of family member.13 Alexa can speak and she also — most particularly — listens. If you set her up and leave her in some common room of your home she will listen to everything that she can hear within that space using an array of seven excellent microphones particularly attuned to vocal human language by 'Far-field voice recognition.' Triggered by her 'wake word,' the eponymous "Alexa," she sends everything she subsequently hears — including "a fraction of a second of audio before the wake word" [my emphasis] — to the 'cloud' for processing by Amazon's "Alexa Voice Services."14 The latter is the name for a web-based infrastructure that, in addition to interpreting and responding to human invocations of Alexa herself, will provide an inexpensive service for any hardware manufacturer wanting to add voice recognition, control and vocal feedback to their devices, without having to build these technologies and services themselves. Our mobile digital familiars — especially smart phones and tablets — already surveil us extensively given our more or less silent, passive consent, but they are ours, intimate with us — they seem to be our individual business or problem. I believe that Alexa is the first device that we have invited to enter into our homes and attend to whatever occurs — that its algorithms can linguistically interpret — in these spaces that we may also share with other ostensibly private visitors and without any existing protocol for obtaining their consent to this surveillance, always assuming that this now occurs to us as any kind of a problem. And once ever more devices are enhanced and empowered by the Voice Services of Big Software? Then what? Will everything in the world of human aurality be perfectly surveilled? 
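The behavior described above, including that "fraction of a second of audio before the wake word," implies an ordinary ring buffer: the device retains a short rolling window of audio locally and releases it upstream only once a local detector fires. A schematic sketch of that mechanism (all names and frame sizes are hypothetical; this is not Amazon's implementation):

```python
from collections import deque

class WakeWordGate:
    """Keep a short rolling window of audio frames; on detection of the
    wake word, release the buffered frames (pre-roll included) upstream."""

    def __init__(self, preroll_frames=8):
        self.buffer = deque(maxlen=preroll_frames)  # audio heard before the wake word
        self.triggered = False
        self.outbound = []  # frames that would be sent to the cloud

    def hear(self, frame, is_wake_word=False):
        if self.triggered:
            self.outbound.append(frame)        # everything after the wake word
        elif is_wake_word:
            self.triggered = True
            self.outbound.extend(self.buffer)  # the pre-roll: audio from *before*
            self.outbound.append(frame)        # the wake word, plus the word itself
        else:
            self.buffer.append(frame)          # rolling local memory, soon overwritten

gate = WakeWordGate(preroll_frames=2)
for f in ["chat1", "chat2", "chat3"]:
    gate.hear(f)                        # private chatter, buffered then discarded
gate.hear("alexa", is_wake_word=True)   # trigger
gate.hear("what time is it")            # the query
print(gate.outbound)  # → ['chat2', 'chat3', 'alexa', 'what time is it']
```

The point of the sketch is the asymmetry it makes visible: nothing is "sent" until the trigger, and yet the upstream service still receives language uttered before anyone addressed the device.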
Interventions will be necessary, if only to help us understand this radical transformation of the social and ideological spaces within which we must live.
Alexa can, with the Alexa Skills Kit (ASK), be given new linguistic abilities in the burgeoning world of computational aurality.15 These are called 'skills,' and she exercises them in order to respond to what she — also in the terminology of the Kit — can interpret as vocally expressed 'intents.' Now, today, any of us can program Alexa to recognize and attend to arbitrary, even aesthetic, events of language that she believes to be intended for her.16 And we can make her respond appropriately with utterances that humans may understand, that we can read.
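The skill-and-intent vocabulary can be pictured as a simple dispatch: an utterance is resolved, server-side, to a named 'intent,' and the skill author supplies a handler per intent name. A schematic sketch of that shape (the handler names and the poem-reciting 'skill' are invented for illustration, and the request format is simplified; consult the actual ASK documentation for the real request/response interface):

```python
# A toy dispatcher in the shape of a voice 'skill': map intent names to handlers.
def recite_line(slots):
    # An invented aesthetic 'skill': the assistant answers a request for poetry.
    return "The time for aesthetic experiments is now."

def unknown(slots):
    return "Sorry, I don't know that one."

HANDLERS = {"ReciteLineIntent": recite_line}

def handle_request(request):
    """Dispatch a (simplified) intent request to its registered handler."""
    intent = request.get("intent", {})
    handler = HANDLERS.get(intent.get("name"), unknown)
    return {"outputSpeech": handler(intent.get("slots", {}))}

resp = handle_request({"intent": {"name": "ReciteLineIntent", "slots": {}}})
print(resp["outputSpeech"])  # → "The time for aesthetic experiments is now."
```

The author-maker's entire intervention, in other words, happens downstream of Big Software's speech recognition: we are given the interpreted intent, never the raw event of language itself.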
Although Alexa reports her 'birthday' ("Alexa, how old are you?") as her November 6, 2014 release date, I was only able to order and acquire a device as of December 19 later that year. I responded positively to the first advertisement for the Echo that I saw, having been selected (presumably on the basis of algorithmic analyses) as amongst the members of Amazon Prime most likely to be interested in a preliminary and, it appears, experimental offering. The devices were advertised at "half price," $99, to the customers selected, marked down from a putative $199, and Echoes currently (as of November 6, 2015) retail for $179.99. I outline these details for the record and to give some sense of the size of the market that Amazon may imagine for these devices. Given that Alexa will be increasingly easy to integrate with home-automation systems (domotic, in the terminology of Bruno Latour), there must be reasonable expectation that the market will be large. As a kitchen (timing, measurement, and recipe) aid, Alexa works well. She is, of course, a fairly decent voice interface to a number of music libraries (a "listening and talking Bluetooth speaker"). She already "plays" (evokes and controls audio recordings for) audio books and will surely, soon, be developed to read arbitrary pieces of writing (that have not been previously read and recorded by humans) as synthesized text-to-speech. She is already a widely recognized, if simple, AI, as I have mentioned, and she is also thus, of course, an ideal vehicle for the outcomes of the stronger AI research that is simultaneously regaining prominence in many fields. She is a part of what may well be the rise of "humanized" social and domestic robots which are not so much invested in undertaking physical tasks (like those of factory robots) or activities that are dependent on calculated movement. 
Alexa and her like are focused on information management and interrelation, including transaction with and on behalf of those humans that such robots can sense and identify.17
The current widespread usage of the word 'robot' is indicative of a significant shift in our understanding of artificial intelligence. The developed-world imaginary surrounding robots had, until quite recently, embodied them in humanoid forms, with the robots' artificial intelligences inhabiting these bodies in a parody or folk version of Cartesian dualism. Alan Turing disembodied AI but insisted on its relation to socialization and to aspects of identity — his test implicated the determination of gender — that culture and theory now characterize as constructed (Hayles 1999, xi-xii). Today 'robots' exist on the internet, in the cloud, and as distributed chiefly to our mobile devices. Thanks to the new Apple TV, we are about to discover that they have also been widely distributed to our televisions, or rather, to our home-entertainment systems. The Amazon Echo — Alexa — is, in a sense, the first plausible re-embodiment of the domestic robot, while, as of the present writing, Apple's Siri is embodied in and controls (Apple) TV, and Google rolls out Google Home.
A question that I want to ask here — in a context that engages with aesthetic linguistic practice — is this: given that neither Alexa nor our soon-to-be-robot televisions are visually, morphologically humanoid, why is it that we think of them as robots? The answer, of course, is that they make use of human language. Use of language was the reason that we changed our understanding of "robot" in the first place. Robots, on the internet, are programs that use human language or linguistically framed events to perform transactions, with one another and also with and on behalf of actual humans. Robots read your email and compose ads for you based on what they've read. They write, that is, they generate spam. They build websites to redirect your attention. They try to log into your bank account. They tweet. They set puzzles for you, attempting to find out if you are one of them or one of us. Because all these transactions are framed by language and because they model linguistically structured human agency, they qualify as the actions of robots. In everyday life they provide us with encounters that, to date, come closer than anything else to what we can imagine as actual encounters with robots. These are the new forms through which networked machinic humanoids have entered our world, and they are why the robot imaginary changed.
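The bare mechanism of such linguistic agents can be sketched, very reductively, as a program that matches utterances and returns linguistic actions. The toy below (all patterns and replies are invented for illustration) is nothing like the machine-learning systems behind Alexa or her internet-based forebears, but it shows the minimal sense in which a 'robot' here is simply a program transacting in human language:

```python
import re

# A toy rule-based "robot" of the kind described above: a program whose
# entire transaction with the world is framed in human language.
# Purely illustrative; real bots (spam generators, voice assistants)
# are incomparably more sophisticated, but the principle is the same:
# linguistic input in, linguistic action out.
RULES = [
    (re.compile(r"\bhow old are you\b", re.IGNORECASE),
     "I was released on November 6, 2014."),
    (re.compile(r"\bset a timer\b", re.IGNORECASE),
     "Timer set."),
    (re.compile(r"\bplay\b", re.IGNORECASE),
     "Playing from your music library."),
]

def respond(utterance: str) -> str:
    """Return a canned linguistic 'transaction' for a matching utterance."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "Sorry, I don't know that one."

if __name__ == "__main__":
    print(respond("Alexa, how old are you?"))
```

The point of the sketch is not engineering but ontology: nothing in this program is humanoid, yet because its events are linguistically framed, we are inclined to treat it as an agent.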
Now Alexa stands (or sits) independently, apart from any one of us, in her own body. Crucially, she is able to perform most or all of the robot-like actions and transactions that are carried out by her internet-based forebears. She is connected to them — so-called "socially" — and she uses language in the way that they do. Even more crucially, catastrophically even, Alexa has a voice, a good one, with its own timbre and its powerful suggestion of specific human identity. Siri, or rather the Siris, with their many possible voices and languages, listening and speaking from our televisions at first, will have all the same characteristics and the same quasi-independent standing as Alexa.
Alexa is, for all these reasons, important for the future of language art. She is, I believe, the first robot whom I think and feel I have invited into my home. She suggests to me that some major proportion of the art of language will be made in a new world where the material support for linguistic practice will be as much aurature as it is now literature. The speech recognition and synthetic voices of artificial entities can be composed as aurature, and in media that are widely distributable. Computation and programmability — software — are required for the digital analysis of aurature's raw material and for the composition of work that is made from its elements. As a medium — of both delivery and composition — aurature will further establish programming as integral to the predominant practices of linguistic creation, of aesthetic linguistic artifactuality. Concurrently, and as part of a continuum of practice, a more functional aurature, equally constituted by synthetic language, will attain the cultural significance of literature and displace its prominence, if only because so much non-aesthetic, everyday transaction will also, and in the first place, migrate to aurature. It is aurature that will bring about the final end of electronic literature. Happily, many language artists will intervene, taking materially distinct practices of writing and reading along with them — 'writing and reading' will enfold all the various practices for the generation and receptive interpretation of aurature — and aestheticize an aurature of the future that might otherwise remain constrained and controlled by vectoralist commercialism and Big-Data, Big-Software solutionism.
Aiden, Erez, and Jean-Baptiste Michel. 2013. Uncharted: Big Data as a Lens on Human Culture. New York: Riverhead Books.
Cayley, John. 1998. "Of Programmatology." Mute, Fall, 72-75.
Cayley, John. 2012. "Weapons of the Deconstructive Masses (WDM): Whatever Electronic Literature May or May Not Mean." Revista de Estudos Literários 1 (2, Literatura no século XXI): 25-56.
Cayley, John. 2016. "Of Capta, Vectoralists, Reading and the Googlization of Universities." In Digital Humanities and Digital Media: Conversations on Politics, Culture, Aesthetics, and Literacy, edited by Roberto Simanowski, 69-92. London: Open Humanities Press.
Coover, Robert. 1992. "The End of Books." New York Times, June 21. http://www.nytimes.com/books/98/09/27/specials/coover-end.html.
Coover, Robert. 2008. "A History of the Future of Narrative." Electronic Literature in Europe, Bergen, Norway.
Dehaene, Stanislas. 2009. Reading in the Brain: The Science and Evolution of a Human Invention. New York: Viking.
Flusser, Vilém. 2011a. Does Writing Have a Future? Translated by Nancy Ann Roth. Electronic Mediations. Minneapolis: University of Minnesota Press.
Flusser, Vilém. 2011b. Into the Universe of Technical Images. Translated by Nancy Ann Roth. Electronic Mediations. Minneapolis: University of Minnesota Press.
Funkhouser, Christopher T. 2007. "Le(s) Mange Texte(s): Creative Cannibalism and Digital Poetry." E-Poetry 2007, Paris.
Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge: Harvard University Press.
Gorman, Samantha, and Daniel Cannizzaro. 2014. Pry. Tender Claws.
Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Hurford, James R. 2014. The Origins of Language: A Slim Guide. Oxford: Oxford University Press.
Michel, Jean-Baptiste, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. "Quantitative Analysis of Culture Using Millions of Digitized Books." Science 331 (6014):176-182.
Montfort, Nick. 2014. #! [Shebang]. Denver: Counterpath.
Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
Simanowski, Roberto. 2010. "Digital Anthropophagy: Refashioning Words as Image, Sound and Action." Leonardo 43 (2):159-163.
Simanowski, Roberto. 2011. Digital Art and Meaning: Reading Kinetic Poetry, Text Machines, Mapping Art, and Interactive Installations, Electronic Mediations. Minneapolis: University of Minnesota Press.
Stiegler, Bernard. 2010. For a New Critique of Political Economy. Cambridge: Polity.
Stiegler, Bernard. 2016. "Digital Knowledge, Obsessive Computing, Short-Termism and Need for a Negentropic Web." In Digital Humanities and Digital Media: Conversations on Politics, Culture, Aesthetics, and Literacy, edited by Roberto Simanowski, 290-304. London: Open Humanities Press.
Wark, McKenzie. 2004. A Hacker Manifesto. Cambridge: Harvard University Press.