Paisley Livingston on Stanislaw Lem and the history and philosophy of Virtual Reality.
The technologies and speculations associated with "virtual reality" and cognate terms (such as "cyberspace") have recently made it possible for scores of journalists and academics to develop variations on a favorite theme - the newness of the new, and more specifically, the newness of that new and wildly different world-historical epoch, era, or Zeitgeist into which we are supposedly entering (and on some accounts, have already entered) with the creation of powerful new machines of simulation. The innovative powers of the machines of virtual reality are so extensive, it would seem, that they are even supposed to be able to achieve the extraordinary feat of revitalizing that tired journalistic genre, "gee-whiz" scientific reporting. "Gee whiz," one can now read, "you just put on a data glove, don the head-mounted display helmet, and step right into a whole new world where the old reality - and even the tired, old-fashioned notion of reality as such - gets replaced by the non-existent reality simulated by the machine. You can fight battles and have sex with people who aren't anywhere near you, or who never even existed. Why, you can actually, I mean really, interact with an illusion!"
Perhaps I am unfair to the journalists who have taken up the topic of recent developments in artificial intelligence and computer simulation, for such writers typically go on to qualify the picture by pointing to the many crucial limitations of the newest devices. Yet the epithet "gee-whiz" is no caricature of the excesses of certain academic critics who, having already been swayed by postmodernist prophecies of a culture of simulacra, were quick to seize upon the technological pretext provided by data gloves and interactive programs.
Fortunately, however, not everyone will be convinced by sweeping claims about virtual reality. A badly needed element of sobriety has recently been introduced by that foremost of futurologists and speculators, Stanislaw Lem. In an essay entitled "Thirty Years Later," Lem speaks directly to the issue of the bedazzling "newness" of the idea of virtual reality, pointedly debunking some of the more ridiculous and exaggerated claims. Citing extensively from his 1962 Summa Technologiae, Lem establishes quite convincingly that recent talk of "virtual reality" was clearly anticipated by his own discussion of "phantomatics." Lem rightly bemoans the cultural amnesia of the current writers who display an appalling lack of discernment with regard to a value that clearly ranks most highly in their scheme of things: newness. Just how new, after all, is the idea of a device capable of generating an illusion indistinguishable from reality? Lem's term for such a device, "phantomatics," was coined 30 years before there was any talk of cyberspace or virtual reality. And although Lem does not broach the issue, it is not impossible to find earlier evocations of the topic which correspond, in different ways, to aspects of the current speculation over virtual reality. One thinks, for example, of the captivating cinematic illusion described in Georges Duhamel's anti-modernist tract, "Scènes de la vie future," Adolfo Bioy Casares's La invencion de Morel, and the remarkable devices conjured up in several generations of philosophical thought experiments in which mad scientists manipulate brains in vats. Looking back even further, we must acknowledge that an important precursor to both virtual reality and Lem's phantomatics is Plato's mythical cavern, the shadows of which have a strong claim on being one of the earliest virtual realities in the history of Western thought.
Another theme in Lem's essay concerns the status of his own previous pronouncements. Here we have a rare case of successful futurology, he claims, for his prognostic of 1962 has been amply confirmed by recent developments. At a time when only Lem's readers know what "phantomatics" means, books on virtual reality pullulate like rabbits, and the machines themselves can be purchased by anyone having the financial means. Lem tells us that today's machines can simulate a helicopter flight to Seattle with such astounding accuracy and vividness that the virtual passenger might be compelled to reach for and use the real airsickness bag placed at his disposal. "Does not this suffice to prove my predictions accurate?" (16), Lem asks. Lem's insistence on saying "I told you so" is not a matter of unmotivated vanity: not only does he modestly point out ways in which his earlier discussion of phantomatics requires correction, but he also establishes the polemical context within which it is pertinent for him to underscore his 30-year advance on current discussions, namely, the context of the negative reception to which his Summa was subjected at the time of its publication. More specifically, Lem is eager to point to the errors of Leszek Kolakowski, who, in an early review of the Summa, castigated Lem for making basic philosophical errors and for confounding fairy-tales with information. According to Kolakowski, Lem's essential problem was his failure to share Maurice Merleau-Ponty's insight that the findings of modern science have absolutely no implications for the truly philosophical questions. Against this idea, Lem has consistently asserted that philosophers can only ignore science and technology at the cost of absurdity. Having arrogantly asserted that his criticisms of Lem's prognostic were "decisive," Kolakowski can now be shown to have indulged in an unacceptable form of philosophical dogmatism, for it has in the meantime become possible to provide an empirical confirmation of Lem's forecasts.
Lem is right to castigate Kolakowski for his rigid critical response to the Summa. I agree with Lem that a philosopher's arguments about purely conceptual truth can only place the broadest constraints on issues concerning empirical fact (assuming, then, along with Lem, that some version of the analytic/synthetic distinction can be defended so as to make it meaningful to continue distinguishing between matters empirical and conceptual; roughly, a statement is analytic just in case its truth is determined solely by virtue of the meanings of the words used to express the statement). When it is a matter of asking what will be technologically possible in the future, a philosopher's only properly philosophical tools are logic and semantic analysis, which means that he or she is essentially restricted to asking whether the futurologist's scenario is a meaningful and coherent one, and hence logically possible. To proceed any further, philosophers must draw on whatever they take to be synthetic or empirical truths, the production and assessment of which is hardly the exclusive province of philosophy, especially when it is a matter of the advance of technology. Qua philosopher, Kolakowski possessed no privileged critical capacity and was in no position to deliver a decisive judgement of Lem's speculations, which is why Lem is right to mock the notion of infallibitas philosophica.
The situation is not, however, as simple as Lem suggests. It does not follow from the points I have just made that Lem's speculations about phantomatics truly have been confirmed by any of the recent developments in virtual reality technology. Not only do those speculations not constitute any single and univocal thesis susceptible to a straightforward empirical test - as Lem misleadingly implies in his haste to score points against Kolakowski - but the empirical basis of the putative confirmation is itself not a simple and immediate fact. Rather, the term "virtual reality" today designates a complex constellation of devices and claims which are themselves in need of a careful assessment. Neither the data glove nor the illusion of a helicopter ride which it helps make possible provides any simple or compelling proof of the accuracy of Lem's fascinating ruminations on phantomatics in 1962 - or so, at least, I shall argue in what follows. My strategy will be straightforward: I try to clarify the different theses that may reasonably be associated with Lem's phantomatics, singling out those most susceptible of the kind of empirical confirmation the author is eager to lay at Kolakowski's doorstep. Only then can we usefully ask whether Lem's claims really have been confirmed.
What claim or claims, then, were most central to Lem's phantomatic prognostic in 1962, and what were his later claims about these claims in 1991? Lem tells us that "phantomatics" was his name for the method used to connect an individual's entire sensorium to a computer which could artificially induce sensations indistinguishable from those experienced in ordinary conditions. Lem states that when he made his forecast about the phantomatic machine, he simply "assumed" that it would be "possible" to produce stimuli indistinguishable from those of ordinary perception. He goes on to add that "in the meantime this has already happened, but as Rome was not built in a day, there are still some imperfections for the time being" (14). As I find Lem's point here fairly subtle - and ultimately equivocal - I think it important to spell out some of the different theses that may be read out from his phrases, along with the premises behind them.
A key feature of Lem's phantomatic thought experiments is his emphasis on an artificial stimulation of a purely sensorial illusion. As is common to all forms of empiricism, he assumes that it is correct to distinguish between the agent's thoughts and the agent's perceptions, and he stipulates that the phantomatic machine's input can only directly influence the latter. Indirectly, the illusory device and its programmers may get the subject to have any number of correct or incorrect thoughts, which indeed is the whole point of his prognostic, but Lem assumes that the ultimate locus of control of cognition (and volition) remains in the agent's mind/brain, not in the machine. For example, the machine can cause the agent to experience all of the perceptions he or she would experience standing on the observation deck of The Eiffel Tower, but the machine cannot directly cause the agent to think or believe "I am now on the Eiffel Tower." What is supposed to make it difficult for the subject to break or unmask the perceptual illusion by means of the exercise of his or her independent thought and volition is the additional stipulation that a feedback loop connects the agent's bodily activity to the perceptual stimulation provided by the machine. If I raise my hand to see whether the tower's iron railing is really there, the program responds to my gesture by providing all of the appropriate perceptions. Lem's speculation, then, is that if the machine's manipulation of my perceptual field is good enough, my thinking, although not directly manipulated, will be confounded, for I will finally be unable to justify any decision concerning the artificial or non-artificial status of my perceptions. I shall return to the question of the cogency of this assumption below.
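The architecture of this stipulation can be pictured as a simple closed loop in which the machine writes only to the perceptual channel, while thought and volition remain on the subject's side. The sketch below is purely illustrative, assuming nothing beyond what has just been described; the names in it (PhantomaticMachine, render_percepts, and the rest) are my own hypothetical labels, drawn neither from Lem's text nor from any actual virtual reality system.

# Illustrative sketch (hypothetical names throughout): the machine maps the
# subject's bodily actions onto fresh percepts, but never writes directly to
# the subject's thoughts or beliefs, which are formed on the subject's side.

class PhantomaticMachine:
    """Closes the feedback loop: an action comes in, the appropriate percepts go out."""

    def render_percepts(self, action):
        if action == "reach for railing":
            # The program answers the test gesture with the expected sensation.
            return {"sight": "iron railing up close", "touch": "cold metal"}
        return {"sight": "view from the Eiffel Tower deck", "touch": None}


class Subject:
    """Thought and volition stay here, beyond the machine's direct reach."""

    def __init__(self):
        self.percepts = None

    def perceive(self, percepts):
        self.percepts = percepts

    def act(self):
        # The subject may try to unmask the illusion by probing it.
        if self.percepts and self.percepts["touch"] is None:
            return "reach for railing"
        return "look around"


machine, subject = PhantomaticMachine(), Subject()
action = None
for _ in range(3):
    subject.perceive(machine.render_percepts(action))
    action = subject.act()
    print(action, subject.percepts)

On Lem's speculation, if the loop on the machine's side is run well enough, nothing in the percepts returned at each pass will give the subject grounds for deciding whether or not they are artificial.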
Given these common threads, we may now go on to look at some different ways of construing Lem's 1962 thesis:

T1: It is logically possible (i.e., not incoherent) for a machine to produce a perfect perceptual illusion (defined as a situation in which someone is made to experience a set of sensory impressions and is incapable of recognizing their artificial origin).

T2: It is now logically, physically, biologically, etc., possible to produce a perfect perceptual illusion (as defined in T1), and one day it will be technologically possible to produce a machine capable of inducing such an illusion.

T1 and T2 are different in important ways. T1 is not susceptible of empirical confirmation (or falsification) and can only be assessed in terms of its logical validity, which is presumably a matter of asking whether the different propositions it includes could all be true at the same time. If one is persuaded that no contradiction lurks in the concepts jointly employed in projecting such a scenario, it would seem reasonable to assent to T1. There is much more to be said, of course, on this subject, but Lem's insistence on a recent empirical confirmation of his forecast suggests that we should turn to T1's more robust alternative, T2.
T2 includes T1's clause about logical possibility and hence must pass the same conceptual test. But it also requires the phantomatic machine to be compatible with the laws and facts of physics, biology, and other relevant bodies of knowledge. Assuming that it is, T2 adds that at some point in time human technology will be capable of realizing the phantomatic device. The best way to show that the latter clause - and with it, the others - has been satisfied is to identify an actual instance of the successful operation of such a device.
Now, Lem's statements in his recent essay go very far towards explicitly asserting that today's virtual reality devices actually confirm the more robust of the two theses, T2. But he qualifies this strong claim by adding that the devices only imperfectly realize the scenario, a qualification which is then partially taken back with the suggestion that the confirmation will eventually be perfect. One wonders how Lem could think that today's machines, some of the limitations of which he describes very well, could in any way - even imperfectly - confirm T2. Lem himself points out that given the state of the art, real-time reactions to the subject's manipulations of the data glove require a drastic simplification of the simulated perceptual field, which is easily recognized as artificial (18). Yet he looks forward confidently to a resolution of this problem along the lines of high-resolution television.
Although Lem nowhere says so, it is important to note that this qualification already vitiates the idea that there has been a literal empirical confirmation of T2. What is more, Lem allows that an even greater difficulty looms on the horizon: it is technologically impossible today to have the machine simulate the perceptions that would accompany a conversation with any linguistically and psychologically competent human interlocutor. Today's machines will all fail this extended Turing test. Lem is admirably honest in pointing out a deficiency in his 1962 thinking on this score: having been overly sanguine about the AI research programme, he had assumed that the only truly difficult task would be the modelling of genius. The behavior of ordinary people, capable only of stereotypical and/or unoriginal thoughts and utterances, could be successfully mimicked, in which case the only effective test of the illusion would be to seek out and converse with a genius!
Given these concessions, it would seem that T2 requires modification, especially if Lem's thesis is to have been confirmed by present-day AI and VR technology. One approach would be to add clauses limiting the range of the machine's illusionary capacities:
T3: It is now logically, physically, biologically, et cetera, possible to produce a perfect perceptual illusion, and one day it will be technologically possible to produce a machine capable of inducing approximations of such an illusion with regard to some of our perceptual experiences (i.e., only audio-visual experience of some nonhuman phenomena).
Suitably pared down in this manner, it becomes plausible that Lem could assert that his 1962 thesis has effectively been confirmed by recent developments (given that we generously grant Lem his high-resolution television clause). Yet there is a certain tension between the two main clauses of T3: on the one hand, there is a very strong opening assertion about the scientific possibility of a radical illusion, and on the other, there is a much weaker thesis about its partial technological realization. When what one hears and sees can be contrasted to what one touches, tastes, and smells, and when the audio-visual illusion, no matter how lifelike and persuasive, excludes all interaction with other agents, the phantomatic device ceases to be the startling and unsettling philosophical engine that both Lem and the postmodernists like to talk about. After all, most of Lem's speculations concern a problem of global indiscernibility, i.e., an agent's inability to know whether the phantomatic experience has ended - or whether its apparent ending and the subsequent resumption of ordinary life was not itself a continuation of the machine-induced illusion. No such problem - and none of the world-historical cultural transformations evoked by Lem and others - arises with regard to the putative confirmation of T3.
Is T3, then, the best formulation of Lem's 1962 thesis - to which one could add the 1991 thesis?
T4: T3 was an original prediction that has now (almost) been confirmed.
Perhaps, but it must be acknowledged that the textual evidence in Lem's essay does not univocally support such an interpretation. Lem in fact oscillates between stronger and weaker claims, sometimes concluding that the subject can identify the phantomatic illusion, at other times asserting that the phantomatic device leads to a solipsistic situation in which the subject can trust no one, since any information proffered by others could turn out to be another manifestation of the machine's strategic and deceptive operations. In regard to this line of thought, one wonders what has become of Lem's concessions about the limits of AI modelling of human intelligence and linguistic behavior. As if in response, he offers this analogy: present-day phantomatics are like computer chess programs; today they cannot beat the best human players, but one day they will beat everyone. Such an assertion suggests that Lem has gone back to defending a much stronger thesis - something like T2 - in which case he would have to abandon the claims about the current empirical confirmation of his 1962 thesis (precisely the tool he uses to bludgeon the philosopher, as well as the main theme of the essay). But in a key passage, we are given this concession: "Certainly the 'phantomatic abduction' is today nothing more than a possibility" (20). It would appear that the confirmation of his 1962 forecast has been postponed to some unspecified future date.
Lem's essay hovers between a dramatic yet unverified thesis, T2, and a far less dramatic, and imperfectly verified thesis, T3. Is there no way of finding an intermediary view, strong enough to warrant the drama, yet at the same time at least intuitively close to the thrill occasioned by actual VR devices? A problem one encounters here involves an important difference between Lem's 1962 phantomatic scenario and his descriptions of current VR experiences. Someone who dons the helmet and glove and begins to interact with a simulated environment is using his or her organs of sight and hearing to perceive images and sounds produced by a machine. This is quite different from Lem's earlier idea of a direct nerve stimulation of perceptual data, along the lines of the brains in a vat controlled by the mad scientist in one of Lem's stories. The difference is not trivial insofar as it corresponds to a rather significant and implausible exaggeration of the technological control of experience on the one hand, and to a radical impoverishment of such control on the other. It is not hard to decide which of these two extremes has not received any empirical confirmation.
Should we wish to abandon T3 in favor of more dramatic and as yet unconfirmed phantomatic hypotheses, another key issue remains to be confronted. As I have stated above, Lem's notion of phantomatics requires a sharp division between the agent's perceptions and cognitions, with direct technical control being exercised only over the former. Whether this is a biological and psychological possibility (quite aside from a technological one) is, I think, an issue of no small complexity, but it should be noted that such an assumption is essential to both T2 and T3. Lem has not confronted this issue in his phantomatic speculations, but it strikes me that in some of his fiction - most notably Eden and the conclusion of Fiasco - he vividly evokes the plight of agents who lack the concepts needed to interpret unfamiliar perceptions. Inverting this formula, we may challenge the phantomatic scenario as defined in T2 and T3 by asking how familiar and convincing percepts could be produced in the absence of the appropriate concepts. Such an argument could lead us to the conclusion that Lem's perfect phantomatics are not only technologically unrealized, but psychologically impossible.
To explore this line of argumentation, we may ask whether an agent's sensory perceptions are uninformed, both synchronically and throughout an entire history of learning, by theoretical constructs and higher cognitive processing. And if they are not - as seems likely - could an external modelling of perceptual inputs have any chance of being indiscernible, given the likely non-identity of the modeller's and the subject's conceptual networks? Can we really anticipate and build an effective model of how everything (or even anything) would look, feel, taste, and sound to someone who has a lifetime of experience, not just of "perceiving," but of cognitively informed perception? To make this kind of objection somewhat more tangible, we may contrast a number of different situations:
(1) Walking into an unfamiliar room, an agent discerns a long, low table against the wall. There is a compact disk player on it, but the agent cannot readily identify this thing, which looks to the agent like a black box. (2) Walking into the same room, another agent sees the object on the table and conjectures, "Must be a stereo amplifier," wrongly identifying the box as an instance of a familiar kind of thing. (3) Walking into the same room, a third agent, who is familiar with CD players, immediately recognizes the device on the table, thinking, "That's a CD player." (4) Walking into the same room, a fourth agent, who is familiar with this CD player's history, recognizes the device on the table and thinks "That's John's old CD player, which must have just come back from the shop. It's still got the long, ugly scratch little Tommy made on it."
The first two agents' failure to see that the black box is a CD player is not the result of some purely perceptual deficiency, for the perceptual conditions in these different cases are, ex hypothesi, identical. More generally, the differences between (1) seeing x, (2) mistaking x for a q, (3) seeing that x is a p, and (4) seeing that x is some particular p depend on the different agents' cognitive processing and background concepts. These differences are not entailed by the perceptual input alone.
Cognitive seeing applies concepts to percepts to produce an act of recognition (or misrecognition). The successful phantomatic program must model this process so that the percept of x which it induces in the subject will be such that the subject thinks that he or she sees this x as an instance of some kind of p. Imagine a phantomatic scenario in which an agent must be provided with sensory data simulating the experience of visiting a friend's house. The program must provide percepts that seem appropriate to this agent, which means that the percepts or sensory input must prompt the right sorts of categorial recognitions, e.g., the friend John's CD player. It is possible that in relation to various trivial cases, a standardized program of perceptual specifications will do the trick, but it also seems highly likely that in regard to a large number of crucial cases, only a more specialized, and perhaps even a highly idiosyncratic program of sensory inputs could suffice to trigger the appropriate responses. For example, the standardized perceptum of CD players - even one fine-grained enough to account for different makes and ages of CD players - will not have the particular scratch that the agent in scenario (4) rightly expects to see on John's device.
It could seem that the way for a phantomatic programmer to get all of these sensory data "right" is "simply" to do a very good and exhaustive job of encoding the perceptual facts of the real world. Having encoded the way John's CD player really looks, the phantomizer can make the agent of scenario (4) think he is seeing this device. On the other hand, to fool the agents of scenarios (3), (2), and (1), any realistic or "generic" CD player perceptum will suffice. But the difference between case (4), on the one hand, and (3), (2), and (1), on the other, implies that the phantomatic machine's inputs must be designed so that they will be appropriate to agents whose conceptual backgrounds are not identical, which means that what is required is not "simply" an encoding of perceptual facts about the world, but also an accurate representation of the diverse memories and belief systems of the agents to be phantomized.
Another scenario can serve to drive this point home. Imagine a case where someone wrongly believes that his friend John has an old, scratched CD player. When he visits the house, that is what he expects to see. It follows that what the phantomizer must produce to fool such a person is not an accurate perceptual display of the contents of John's house, but one attuned to the visitor's faulty mental map of what the house is believed to contain. To phantomize someone successfully, the program would have to include a complete and accurate representation of all of the agent's relevant thoughts and beliefs, a task of monumental proportions. One may as well conclude that mindreading is required to make Lem's phantomatic illusion possible! Mindreading may be logically possible (i.e., not incoherent), but a lot of well-entrenched science strongly suggests that it is physically, biologically, and psychologically impossible. If that is the case, then T2 and T3 are untenable.
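The difficulty can be restated in quasi-computational terms. On the argument just given, a phantomatic program that consults only an encoding of the world will betray itself to a subject whose expectations diverge from the facts; to remain undetected it must also consult a model of that particular subject's memories and beliefs. The fragment below is no more than a schematic illustration of this dependence, and every name in it (world_model, belief_models, render_percept) is an invented placeholder rather than a description of any actual system.

# Schematic illustration (hypothetical names throughout): the percept served to
# a subject must track that subject's own beliefs about John's CD player, even
# where those beliefs diverge from the facts recorded in the world model.

world_model = {"johns_cd_player": {"scratched": False, "age": "new"}}

# What each visitor remembers or expects to find in John's house.
belief_models = {
    "accurate_visitor": {"scratched": False, "age": "new"},
    "mistaken_visitor": {"scratched": True, "age": "old"},   # the false belief described above
}

def render_percept(subject):
    # Rendering from world_model alone would show an unscratched, new device and
    # so unmask the illusion for the mistaken visitor; to stay convincing, the
    # program must render what this subject expects to see, which presupposes
    # access to the subject's beliefs - in effect, mindreading.
    expectation = belief_models[subject]
    return {"kind": "CD player",
            "scratched": expectation["scratched"],
            "age": expectation["age"]}

for subject in belief_models:
    print(subject, render_percept(subject))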
I have criticized some of Lem's momentary exaggerations, but I should not conclude without pointing to ways in which his thinking about virtual reality remains far more cautious and astute than that of the current enthusiasts of postmodernism. Lem asks to what extent the most successful form of phantomatic illusionism - along the lines of T2 - could be generalized and made the basis for a new kind of civilization in which everyone would enjoy the ersatz realities provided by some vast system of phantomatic machinery (in short, the sort of society of the simulacrum evoked by various postmodernists). In this regard, Lem's conclusion is that "no civilization can phantomize itself one hundred percent" (20). Although the agents could be given the effective illusion that all of their needs and fantasies were being realized, the underlying reality of their biological needs would remain. Should these be satisfied by some vast system of apparatuses, the agents could go on living in their illusions for a time, but their failure actually to reproduce would make of them the civilization's final generation - a remarkable form of collective euthanasia, Lem remarks. Here we encounter, as is so often the case, Lem's bracing realism. Matter exists, he asserts, and although modern physics proposes different theories of it, only one of them can be right; whatever difficulties we may have in recognizing the external, mind-independent reality, it is there, and no culture can afford to divorce itself from the effort to fashion truthful and effective representations of it. As Lem aptly points out in his rebuttal of Kolakowski, there is an important difference between taking an interest in our culture's increasing powers of simulation, and hailing them as the advent of a radically new and wonderful era.