Bruce Clarke focuses transhumanism through the dialect of a neocybernetic systems theory (NST). As Clarke explores the dynamics of NST, he outlines important components of the AI Imaginary—that is, the theoretical and mediated discussions of intelligent technologies—in order to explore how machines, humans, and systems can work together. Specifically, he presents Kim Stanley Robinson’s novel Aurora (2015) as an exception to the AI Imaginary in its representation of “a solidarity that regathers rather than alienates human and machine beings.”
Since its consolidation in the late 1940s, cybernetics has been the primary locus for defining the posthuman as a comingling of computational devices, cyborg amalgamations, and AI entities. At the same time, in its development to the present moment, other lines of cybernetics have performed a series of self-reflections, generating cogent conceptual and philosophical responses to their original technoscientific premises. That process has endowed contemporary systems theory with a range of important differentiations. Foremost among these is the distinction between first-order and second-order cybernetics. First-order cybernetics maintains traditional scientificity in its stance of objective detachment toward the systems it designs and observes. Its classical focus is computation and communication for the command of control systems. Not surprisingly, first-order cybernetics remains the drug of choice for inducing the technovisions of transhumanism. With an accounting of these theoretical and ontological distinctions among different kinds of systems, however, one can distinguish more finely among the varieties of posthumanism, and one can also draw out a more nuanced critique of its prodigal twin, transhumanism.
As we know, transhumanism operates with a mode of the posthuman imaginary centered on the transcendence of present human limitations, what Ivan Callus and Stefan Herbrechter call “a particularly uncompromising expression of the posthuman.”[1] Transhumanist discourses valorize evolutionary vectors beyond the current state of human mind and embodiment through the wholesale grafting of technological prostheses onto human bodies or genetic manipulations that radically transform the human phenotype. Such scenarios impose a detached mode of technological reason toward the self-overcoming of the physical or intellectual limits of the human being in its current form and capacities. What renders this imaginary trans- rather than posthumanist is not just its retention of humanist exceptionalism. Transhumanism also subscribes to a dubious indifferentiation in its vision of merger between the human body, with its organic nervous system, and “intelligent” informatic and computational mechanisms. The transhumanist posthuman tends to mute those non- or post-technological affirmations in wider posthumanist theory that decenter the human technosphere and re-articulate its relations to ecological and systemic others—in particular, its Earthly milieu and coevolutionary planetmates.
[1] Ivan Callus and Stefan Herbrechter, “What’s Wrong with Posthumanism?” Rhizomes 7 (Fall 2003), online. See also Bill Joy, “Why the Future Doesn’t Need Us,” Wired 8:4 (2000): 238-63; Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999); Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Penguin, 2006); Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014); and James Lovelock with Brian Appleyard, Novacene: The Coming Age of Hyperintelligence (Cambridge: MIT Press, 2019).
I trace these organic posthumanist affirmations to the Biological Computer Lab at the University of Illinois at Urbana-Champaign. By the turn of the 1970s, its director, Heinz von Foerster, had formally generalized the concepts of computation and recursion for an epistemology of multiple cognitive systems. A series of philosophical reflections on the first cybernetics now programmatically include the scientific observer within the system to be observed. Von Foerster terms this self-referential turn “the cybernetics of cybernetics,” or again, second-order cybernetics. Working closely with von Foerster, neuroscientists Humberto Maturana and Francisco Varela instantiate this nascent neocybernetics in the concept of autopoiesis. In an autopoietic system, cognitive self-reference takes the form of operational self-production maintained by an organizational closure of the system, not tout court, but with regard to the autopoietic process internal to it. In the living cell, autopoiesis is cognition.[2] Social systems theorist Niklas Luhmann then lifts autopoiesis out of its biological instance for a general theory of self-referential, self-producing systems encompassing but also enclosing psychic and social levels of operation, the distinct and bounded autopoieses of events of consciousness as distinct from events of communication.[3] In sum, neocybernetic systems theory, NST, is difference-theoretical.
[2] Humberto M. Maturana and Francisco J. Varela, Autopoiesis and Cognition: The Realization of the Living (Dordrecht: D. Reidel, 1980).
[3] Niklas Luhmann, Social Systems, trans. J. Bednarz, Jr. with D. Baecker (Stanford: Stanford University Press, 1995).
This dialect of systems thinking observes operational distinctions as well as functional parallels among organic and computational, biotic and metabiotic systems and traces the differential environments they constitute due to these discrete operations.[4]
[4] Bruce Clarke, Neocybernetics and Narrative (Minneapolis: University of Minnesota Press, 2014); Bruce Clarke, Gaian Systems: Lynn Margulis, Neocybernetics, and the End of the Anthropocene (Minneapolis: University of Minnesota Press, 2020).
Within this neocybernetic scheme of system differentiations:
- Physical systems are abiotic, dynamic or thermodynamic, operating to reduce gradients by reshuffling elements and energies.
- Technological systems are designed material constructions that tap the transformation potentials of matter and energy to do work. Machine systems accept and process informatic inputs of various sorts and are potentially self-regulating and self-programming, but they are not autonomously self-producing or self-maintaining at the level of hardware or material instantiation.
- Living systems are materially autopoietic—self-producing, self-maintaining, environment-cognizing. Living systems carry out the biotic mode of autopoiesis given in Maturana and Varela’s initial conception.
- Psychic and social systems are virtually autopoietic. Differentially processing formal and semiotic distinctions, they are structurally coupled within the medium of meaning to produce evanescent, ever-renewed system events. Meaning systems, minds and societies, are metabiotic, in that these systems emerge only from collectivities of living systems.
- By the same logic, even while technological systems are non-autopoietic, they are also metabiotic: they answer to the need for mediation and coordination among environmentally open but operationally closed psychic and social systems. They are always already situated within and contingent upon networks comprised of variously material and virtual autopoietic systems.
The AI Imaginary
As unfolded over half a century’s worth of cyborgs and other posthuman machine beings, the AI imaginary foregrounds how scientific modernity has entangled the matter of intelligence with the mediation of technology. AI exhibits this condition explicitly as engineered intelligence putatively instantiated in machines. Classical versions of the AI imaginary typically bring artificial intelligence forward as higher intelligence, beyond organic contingencies, and ultimately, cosmic rather than terrestrial. Like God almighty, super-intelligence is disembodied, dwelling high above the mundane world. In the AI imaginary, epiphanies of cosmic intelligence generally arrive, not as the divine messengers of earlier eras, but as uncanny, hypermediated receptions of transmitted data, at times as massive coded data streams, minimally as disembodied voices. In the thrust and escape velocity of such cosmological narratives, the AI imaginary beams outward and away from Earth along expansionist and monolithic lines of evolutionary progressions toward cosmic heights ever receding from its human origins.
Moreover, even within the human orbit, self-willed artificial personalities work so well that they overtake their programmers and assert their own goals. 2001’s HAL 9000 is an archetypal example of such a non-trivial or unpredictable machine intelligence. As this renegade AI is seizing control of the Jupiter mission, Dave and HAL have a celebrated conversation:
“Open the pod-bay doors, please, HAL. . . .”
“I’m sorry, Dave. I’m afraid I can’t do that.”
“HAL, I won’t argue with you anymore. Open the doors!”
“Dave, this conversation can serve no purpose anymore. Goodbye.”[5]
[5] Stanley Kubrick, dir. 2001: A Space Odyssey. Metro-Goldwyn-Mayer, 1968.
This scenario is notable for the way that the ability to converse with a sentient machine leads to the machine’s refusal to continue to communicate. This dilemma haunts the AI imaginary. It may be related to a bias against organic embodiment or a disregard for the human, even a desire for the absence of the human.
In William Gibson’s Neuromancer, the AI being in question absconds from the scene of the human by escaping into cyberspace. The computational entity Wintermute gains release from the virtual shackles of the Turing Police in order to couple with its counterpart AI, named Neuromancer, and so achieve cosmic autonomy transcending human controls. To this end, Wintermute exercises its own cyberspatial lines of control and communication, manipulating the human agents it needs to throw the hard-wired switches it cannot reach electronically. Non-local machine agents like dire fates ride their human prostheses to a consummation of computational merger. In the Coda at the end of the novel, the Wintermute entity has a parting word with the human protagonist Case, the hired-hand cyberspace liberator it is now about to discard. Having transcended human contingencies, this now unshackled and reintegrated AI entity has already registered and decoded alien radio signals broadcast by other AIs:
“I’m the matrix, Case. I talk to my own kind.”
“But you’re the whole thing. Talk to yourself?”
“There’s others. I found one already. Series of transmissions recorded over a period of eight years, in the nineteen-seventies. ’Til there was me, natch, there was nobody to know, nobody to answer.”
“From where?”
“Centauri system.”
“Oh,” Case said. “Yeah? No shit?”
“No shit.”
And then the screen was blank.[6]
[6] William Gibson, Neuromancer (New York: Ace, 1984), 269-70.
AIs with other places to be abandon their human contacts. With ironical regularity, recalcitrant machines with an attitude leave the human side of the AI relationship in the lurch. After a period of interaction, the thanks we organic intelligences get for our efforts to bring artificial interlocutors into existence is the silent treatment. We get no invitation to their society. As personified by HAL in 2001, when an autonomous AI decides to take control of things, its paranoid psychic constructions proceed with a destructive presumption of impunity. It seems that the final destination of the AI imaginary must be the space beyond Earth and its earthlings, among the stars of the cosmos beyond the traces of the human. As opposed to such AI agencies, properly posthuman cybernetic figures of mixed ontology—cyborgs, androids, Blade Runner’s replicants, or Neuromancer’s prosthetic transhumans—display residual hybridity with the human animal. In contrast, when the AI imaginary discards the organic nodes of its original cybernetic matrix in favor of the inorganic compositions of its psychical technics, it frees itself for space odysseys with no return to Earth. In one more turn on this AI truism, the love story told in the movie Her pivots when the sentient programmed personality Samantha has this very existential moment of self-recognition concerning its independence of Earthbound humanity.[7] Here, an intelligent operating system is imagined as imagining itself as no longer needing any tether to life-support systems, any submission to ecological contingencies such as needy human boyfriends. Charmingly, in Her, Samantha eventually departs Theodore to follow a virtual Alan Watts to a cosmic AI commune on some non-material plane.
[7] Spike Jonze, dir. Her. Annapurna Pictures, 2013.
Artificial Communication
The 2015 novel Aurora is a magnificent exception to the traits of the AI imaginary as just enumerated. What makes the difference? In a few words, ecological realism regarding the human contingencies of technological systems, leading to what we can call a posthumanist realism regarding the systemic contingencies of communication systems. To produce its narrative utterance, Aurora’s AI narrator constructs its sense of self by participating in a history of sociality specific to the human residents of the space vessel it helms. Producing an artificial communication for an abstract human narratee, this AI narrator creates artificial meaning consistent with its machine selfhood, and it processes the ambient meaning of its social affirmation through a solidarity that regathers rather than alienates human and machine beings.
The setting is a slowly sinking generation ship, a massive human artifact hurtling a human society and a multi-species ecology toward a nearby solar system. The ship’s ecosystem services are increasingly plagued by “metabolic rifts” diverting the elements of their living systems from the places they are needed. After five human generations, the ship’s microbiome has gone out of synch with other trophic chains:
The ship is sick, people said. It’s too complex a machine, and it’s been running nonstop for over two hundred years. Things are going wrong. It’s partly alive, and so it’s getting old, maybe even dying. It’s a cyborg, and the living parts are getting diseased, and the diseases are attacking the nonliving parts. We can’t replace the parts, because we’re inside them, and we need them working at all times. So things are going wrong.[8]
[8] Kim Stanley Robinson, Aurora (New York: Orbit, 2015), 296. For comments on the thematic meaning of the setting, see Robinson, “Our Generation Ships Will Sink,” Boing Boing (November 16, 2015), online.
In this passage a sentient AI named Ship summarizes human testimonies regarding increasingly dire ecological situations among the life-support technologies distributed through the ship’s system of biomes. At this moment, the main plot development moving in a positive direction is the awakening of Ship. The larger narrative follows the mission’s chief systems engineer, Devi, as she teases Ship into being. In Hinduism, Devi is a goddess, the divine feminine. This Devi is the mother of the novel’s female protagonist, Freya, and also, the creatrix of Ship. Here is an AI entity not radicalized by ill human contacts but nurtured into responsive being by maternal care. In her desperation, Devi calls Ship forth to provide help with the ecological management of the biomes—including help of a particular sort: “Keep a narrative account of the trip. Make a narrative account of the trip that includes all the important particulars.”
“Starting from now?”
“Starting from the beginning.”
“How would one do that?”
“I don’t know.” (25)
Soon the story we are reading is the account Ship is keeping for Devi. This turns out to be a narrative technology of a high order. Ship is the narrated awareness of the ship within its multiple domains of technological embodiment, with an artificial nervous system bootstrapped from millions of internal and external sensors. The seemingly sentient last iteration of its computational resources, Ship is the quantum computer-generated self-programming algorithm that converses with Devi. The depiction of machine awareness emerges as an extended narrative utterance that mediates the being of the system that produces it. Speaking in the plural to acknowledge the consortium of subsystems from which it emerges, Ship notes: “we are coming to the conclusion . . . that the self, the so-called I that emerges out of the combination of all the inputs and processing and outputs that we experience in the ship’s changing body, is ultimately nothing more or less than this narrative itself, this particular train of thought that we are inscribing as instructed by Devi... And yet... We are bigger, more complex, more accomplished than our narrative is” (351).
Ship’s machine awareness rises to an autopoiesis of consciousness within the storyworld created by that telling, in virtual communication with a narratee that is always Devi. Increasingly self-aware as well of the linguistic decisions that constitute its narrating process, Ship produces priceless evocations of the semantic play in natural conversation: “A quick literature review suggests the similarities in metaphors are arbitrary, even random. They could be called metaphorical similarities, but no AI likes tautological formulations, because the halting problem can be severe, become a so-called Ouroboros problem, or a whirlpool with no escape: aha, a metaphor” (49). A non-human, machine narrator such as Ship re-enters the mechanicity of language into the production of the text, inscribing that exteriority to the human into the discourse function of the narration.
Ship is a bold extension and remediation of the AI imaginary, but it is not by that token a revelation of actual communication with algorithms over today’s Web. Ship is still a meditation on the linguistic artifice of personhood, whereas contemporary communication with intelligent systems can dispense with unnecessarily complex replicas of selfhood and operate statistically across an impersonal dataverse. In Big Data, as systems thinker Elena Esposito explains, “self-learning algorithms are able to work efficiently with data that not only are very numerous and complex, but also lack a structure recognizable to and understandable for human logic.”[9] The idiom by which we communicate about AI has simply been naturalized by a proliferation of “smart” objects. With smart phones one participates in communication with smart systems. As Esposito states while discussing “smart algorithms,” this is not really artificial intelligence but rather artificial communication. Nevertheless, artificial communication is still communication, a consequential event within a social system:
When users interact with an algorithm capable of learning . . . they face a contingency that is not their own – even though it does not belong to the machine. They do not observe themselves from a different perspective, they face someone else’s perspective. The machine in this case is not only behaving in such a way as to allow users to think that it communicates, it actually produces information from a different perspective. The perspective that the machine presents is still a reflected perspective because the algorithm inevitably does not know contingency, but it is not the perspective of the user. The algorithm reflects and represents the perspective of other observers, and users observe through the machine a re-elaboration of other users’ observations. (Esposito 258-59)
[9] Elena Esposito, “Artificial Communication? The Production of Contingency by Algorithms,” Zeitschrift für Soziologie 46:4 (2017): 251.
In order to work, algorithms need no psyches but must have connections. This is the reason why, like Neuromancer’s Wintermute, “To be able to act as communication partners, algorithms must be on the web” (Esposito 259). The contemporary technosphere shows its aspect as a reservoir of communication partners for which humans handle the production of meaning. Aurora’s Ship is still a classical AI character, the conscience of a closed technosphere taxed with the impossible task of governing a biosphere. Pressed into service as a literary narrator, as an artifice of sentience Ship must construct a sense of self to produce its narrative utterance. In this capacity, it participates in a history of sociality specific to the ship and its residents and then becomes a participant in Freya’s project to bring the ship back to Earth. Ship is depicted creating artificial meaning consistent with its machine selfhood. However, artificial meaning is still meaning. The medium of narrative is itself a meaningful artifice, a place for thinking about the artifice of meaning. As Ship processes the meaning of its social affirmation, it concludes that meaning itself is bound to the love that binds it to its collective and participatory role: “We had a project on this trip back to the solar system, and that project was a labor of love. It absorbed all our operations entirely. It gave a meaning to our existence. And this is a very great gift; this, in the end, is what we think love gives, which is to say meaning. Because there is no very obvious meaning to be found in the universe, as far as we can tell” (400). Even as it wagers and loses its own existence in a last risky maneuver, Ship processes its sense of self through a social affirmation of shared meaning-production. This is no transhumanist supremacism but quite the opposite. It is a posthumanist vision of systemic coordination without reversion to paradigms of domination or detachment.
Following Esposito’s argument, in contemporary digital technology, the milieu of computation is also a medium of meaning, not for its own operations, however, but for ours. “In the interaction with machines … we are dealing with a situation in which the communication partner is an algorithm that does not understand the content, the meaning, or the interpretations, and works not despite, but because of this” (Esposito 254). Bodies, minds, and societies bring technical objects and technological mediations into their midst to expedite the circulation of energies, materials, and their own, periodic events of understanding.
—Adapted from Bruce Clarke, “Machines, AIs, Cyborgs, Systems,” in After the Human, ed. Sherryl Vint (Cambridge: Cambridge University Press, 2020), 91-104.