Michael Mateas responds

2004-01-09

As alternatives to agency-obsession, “critical technical practices” that connect art and technology are front and center in the work of Michael Mateas.

Phoebe Sengers’ paper describes a new practice of Socially Situated AI (SS-AI), and provides a concrete example of this practice in action. As Sengers points out, SS-AI shares an affinity with other culturally-oriented AI practices, including Simon Penny’s, Warren Sack’s, and my own practice, Expressive AI (E-AI) [Mateas 2000a]. In this response I’d like to briefly explore some of the similarities and differences between SS-AI and E-AI. I’ll first provide a brief description of the two practices and then examine similarities and differences in four thematic categories. Sengers provides three postulates for SS-AI:

1. An agent can only be evaluated with respect to its environment, which includes not only the objects with which it interacts, but also the creators and observers of the agent.

2. An agent’s design should focus, not on the agent itself, but on the dynamics of that agent with respect to its physical and social environments.

3. An agent is a representation.

Rather than thinking of an agent as an autonomous thing-in-itself, SS-AI considers the agent a communication between its human designer and the audience that views it. This moves the focus of agent design from autonomy to human readability.

E-AI is a new interdiscipline of AI-based cultural production combining art practice and AI research practice. E-AI also changes the focus from an AI system as a thing-in-itself (presumably demonstrating some essential feature of intelligence) to the communication between author and audience. The technical practice of building the artifact becomes one of exploring which architectures and techniques best serve as an inscription device within which the authors can express their message. This explicit manipulation of the authorial and interpretive affordances of AI architectures requires a reflective stance in which the stories narrating the operation of the machine, and which give the machine meaning, are self-consciously designed. Instances of E-AI include Office Plant #1 [Boehlen and Mateas 1998], a sculptural installation which responds physically to the content of email being received at the computer terminal where it is “planted,” and Terminal Time [Mateas, Vanouse, and Domike 2000b], a mass-audience interactive story generation machine which constructs ideologically-biased documentary histories in response to audience feedback. E-AI does not single out a particular technical tradition as being peculiarly suited to cultural production. Rather, E-AI is a stance or viewpoint from which all of AI can be rethought and transformed.

Critical technical practice

Both SS-AI and E-AI are instances of what Agre calls critical technical practice (CTP). Agre defines a CTP as a scientific and technical practice which engages in a continuous process of reflective critique of its own foundations. This reflective critique consciously constructs new, contingent myths to heuristically guide the practice. No fixed point is ever sought or found. A CTP is in a continuous state of revolution.

A critical technical practice would not model itself on what Kuhn called “normal science,” much less on conventional engineering. Instead of seeking foundations it would embrace the impossibility of foundations, guiding itself by a continually unfolding awareness of its own workings as a historically specific practice. It would make further inquiry into the practice of AI an integral part of the practice itself. It would accept that this reflexive inquiry places all of its concepts and methods at risk. [Agre, p. 23].

In Sengers’ case, a recurring failure of autonomous agents (their incomprehensibility to human observers) led to reflection, using the tools of schizoanalysis and the analysis of institutionalization, on the underlying assumptions of autonomy in AI, which in turn led the way to a new architecture and design approach focused on agents-as-communication. In my case, a focus on the interpretive and authorial affordances of AI systems leads to reflection on, and explicit design of, the rhetorical systems by which the meaning of AI systems is narrated.

Primacy of agency

SS-AI focuses on agents, that is, software which can be understood as an intentional entity, like a human or animal, responding to and pursuing goals in an environment. In particular, SS-AI replaces “autonomy” with “communication” as the driving design consideration in the construction of agents. Within the AI community, the debate between situated and Good Old Fashioned AI (GOFAI) is organized around the notion of an agent. Historically, situated AI arose as a reaction to recurring problems in GOFAI’s design of complete agents, particularly robots. In recent years much AI research has been organized around the metaphor of “agent,” to the extent that a popular introductory AI textbook describes itself on the cover as “The Intelligent Agent Book” [Russell and Norvig 1995]. However, AI-based cultural production is broader than agent design. For example, while Office Plant #1 can be understood as an agent, Terminal Time is not an agent (at least it can’t be understood as an agent without broadening the notion of agent until it is vacuous), and yet it is indisputably an instance of AI-based cultural production. Agency, together with associated concepts such as autonomy, constitutes only one rhetorical cluster within the broader discourse of AI. An AI-based artist aligning herself too strongly with the notion of agent may miss out on a rich field of alternative strategies for situating AI within cultural production.

How fundamental is SS-AI’s focus on agency and the associated critique of autonomy to SS-AI as a practice? What would it mean to extend SS-AI to other metaphor clusters within AI? In the three postulates of SS-AI, if one just replaced the word “agent” with the phrase “AI system,” what would this change? Do the three postulates summarize the results of the analysis of autonomy (i.e., schizoanalysis, the analysis of institutionalization, the application of narrative psychology), or do they serve as a framework within which many humanistic interventions (perhaps not organized around the notion of agent) can be made, with the analysis of autonomy as a concrete instance?

Disciplinary relationships

Simon Penny describes his art practice as sitting on three legs: technical practice, art practice, and cultural theory. This same formation is found in E-AI. Cultural theory serves as the transducer, the relay between art practice and technical practice.

Criticism (i.e., demystification) alone is not enough to constitute a CTP. The toolbox of the critical theorist is primarily a toolbox of negation. There is certainly a mass of naturalized ideology surrounding technical practices, and negating this naturalized ideology, thus opening the path to alternatives, is precisely what makes critical theory part of CTP. But a CTP also continues to produce positive knowledge within its domain. E-AI continues to produce AI research which can be understood as new knowledge by other AI practitioners. This new knowledge is now produced within the maelstrom of an always contingent, infinitely ramifying self-reflective loop rather than from the stable, unmoving comfort of naturalized bedrock. Knowledge production is not a project of negation, but rather of construction. Tools of negation alone will not suffice.

In E-AI, the three-legged stool of art, AI and cultural theory provides a combination of tools of construction and tools of negation. Art and AI provide two “positive” modes of knowledge production which mutually inform and alter each other; critical theory keeps things boiling, preventing sedimentation and the eventual naturalization of a bedrock, providing a language for uncovering and articulating assumptions, and serving as a transducer/relay between the two practices.

This tripartite structure is visible in both Sengers’ and Agre’s work. In Sengers’ case, schizoanalysis and the critique of institutionalization (critical theory) diagnose a problem in reactive agent architectures, while Bruner’s narrative psychology (psychology), when understood within the heuristic matrix established by the schizoanalysis of agents, produces new knowledge in both AI (a technical solution and, more importantly, a story about that technical solution) and narrative psychology (it says something new about Bruner’s theory through concrete actualization). In Agre’s case, deconstruction (critical theory) is used to invert hierarchical oppositions latent in AI research (particularly the inside/outside master narrative of mind/world), while the phenomenology of everyday life (ethnography, micro-sociology), when understood in the heuristic matrix produced by the deconstructive analysis, produces new knowledge in both AI (a technical solution and accompanying story about the technical solution) and ethnography (it says something new about ethnographic understandings of life routines through concrete actualization).

Representation

Both E-AI and SS-AI are in complete agreement that an agent (or, more generally, an AI system) is a representation. E-AI practice is concerned with articulating doubled rhetorical and technical machines which provide an appropriate framework within which an experience for an audience can be authored. The system (and accompanying rhetorical formation) must function (be readable and inscribable) for both the author and the audience. In Sengers’ case, the technical support for signs, signifiers, transitions, and meta-controls, when coupled with the accompanying story about the incomprehensibility resulting from black-boxing and the need to support intentional (narrative) understanding of agents, provides a framework for agent design which is readable and inscribable by agent designers. Agent designers can then use the framework to build agents that in turn are readable and inscribable, on a different semiotic level, for the agents’ audience.

References

Agre, P. 1997. Computation and Human Experience. Cambridge, UK: Cambridge University Press.

Boehlen, M., and Mateas, M. 1998. “Office Plant #1: Intimate space and contemplative entertainment.” Leonardo, Volume 31 Number 5: 345-348.

Mateas, M. 2000a. “Expressive AI.” SIGGRAPH Art and Culture Papers, Proceedings of SIGGRAPH 2000.

Mateas, M., Vanouse, P., and Domike, S. 2000b. “Generation of Ideologically-Biased Historical Documentaries.” In Proceedings of AAAI 2000. Austin, TX, pp. 236-242.

Russell, S., and Norvig, P. 1995. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.
