An autobiographical reflection by Warren Sack, prompted by two particular questions.
In the course of my response, I would like to address two questions: one from each of my respondents.
Phoebe Sengers: How does Warren's work instantiate a critical technical practice, and what does it tell us about critical technical practices and what they could be?
Rebecca Ross: How does participation in an event or structure, such as a very large conversation, via a map or overview of the structure, alter the nature of the actions we choose to take upon and within that structure?
I think the two questions are closely linked. Both are questions about methodology and, implicitly, about what counts as a finished piece of work. I will attempt to answer both by starting in an autobiographical mode. My hope is that this autobiographical approach will make my methodology clearer and, also, fit within the theme of this project, First Person.
During the last two years of college, and then for three more years after college, I worked as a research assistant at the Yale Artificial Intelligence Laboratory in the Cognition and Programming Project. My job was to evaluate and then improve a piece of software, PROUST (Johnson, 1986), that a Ph.D. student had written for his dissertation. PROUST was a program to analyze programs. Specifically, PROUST was designed to be an "intelligent tutoring system": it analyzed novices' computer programs and then provided English-language feedback to the students concerning the strengths and weaknesses of their programs.
The PROUST system comprised about 15,000 lines of code written in the Lisp programming language. It was written in a symbolic AI style. By this I mean that its functionality - both in the comments included in the code and in its presentation in published papers - was narrated as if the system itself "understood" the intentions of the novice programmers. While this use of a vocabulary of "intentions" and "understanding" was, at least at the time, a rather standard means of describing and structuring an AI program, it was not at all an acceptable practice for programming language designers. I struggled for years to improve upon the design of PROUST using a symbolic AI approach. Ultimately, I abandoned this approach, undertook the same task using the tools of a computer programming language designer, and thereby succeeded in improving on PROUST. I designed and implemented a new system, the ProgramCritic (Sack, 1992), that was two orders of magnitude smaller than PROUST, faster than PROUST, and more extensible.
I designed the ProgramCritic using the lexicon and rules of the programming language design game; that is, I used a vocabulary and set of aesthetics peculiar to the practice of computer programming language designers. My claims were phrased in terms like these: declarative semantics, matching interpreter, dynamic and static scoping, function calls, and Prolog's variable instantiation and backtracking mechanism.
In contrast, the author of the PROUST system did not design PROUST with the language design game in mind. Rather, he was working under the constraints of a different "pre-paradigm," a different language game - one that was current in the environment in which he designed PROUST (the Yale Artificial Intelligence Lab from 1981 to 1985). One might call this Yale game the AI/cognitive science game, because the point of the game was to identify "real" mental structures (like schemas, goals, and plans) and then code them up in computer programs meant to exhibit some sort of "intelligence."
By refusing to play the AI game and by taking the perspective of a language designer, I was able to point out the weaknesses of the knowledge representation languages used in PROUST. I was able to show how one might characterize the bulk of PROUST as an indirect means of implementing a Prolog interpreter (i.e., an interpreter for a particular kind of computer programming language, the logic programming language Prolog). I implemented the ProgramCritic in Prolog. It was then used as the "smarts" for a programming environment for novices. The system was field tested in several high schools, and I was issued a patent for the work (Sack and Bennett, 1993).
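To give a flavor of what such a declarative, Prolog-style formulation looks like - as opposed to a narration of the machine's "intentions" - here is a minimal, hypothetical sketch. It is not code from PROUST or the ProgramCritic; the predicate names and the facts about a student's solution to a rainfall-averaging exercise are invented for illustration. The point is only that a critique can be stated as a logical rule, and that Prolog's variable instantiation and backtracking do the searching.

    % Hypothetical facts describing a student's program (invented for illustration,
    % not taken from either system).
    reads_input(line(3), rainfall).
    reads_input(line(4), days).
    checked_for_sentinel(rainfall).    % the student tests rainfall against the sentinel value

    % A critique holds for any input variable that is never checked for the
    % sentinel value; backtracking enumerates the candidates.
    critique(missing_sentinel_check(Var)) :-
        reads_input(_, Var),
        \+ checked_for_sentinel(Var).

    % ?- critique(C).
    % C = missing_sentinel_check(days).

Nothing in this style of description attributes "understanding" to the machine; the vocabulary is that of facts, rules, matching, and backtracking.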
Right about this time (late 1990), I met Phil Agre. I found, to my delight, that Agre had a phrase that seemed to describe the work experience I had just lived through. Agre calls this pursuit of technical work conjoined with critical attention to the ideological limits of one's own practice a critical technical practice (Agre, 1997).
Agre's story about his dissertation work has a variety of parallels to my own story. Most obviously, both his work and mine led us to leave AI practices behind in order to successfully address a problem that was originally posed to us in AI terms. For both of us, part of this "exit" from AI entailed abandoning the AI practice of describing machinery in terms of intentions, plans, and goals and - instead - adopting/adapting a non-intentional vocabulary. For Agre, this other approach included an analysis of daily routines (rather than "problem solving methods") from a phenomenological perspective. For me, this other approach was closer to my starting point: it entailed an adaptation of the methods of computer programming language design rather than the methods of AI.
At the time, this move away from AI uses of "intentional machinery" appeared to me to be the main mark of a critical technical practice. Assuredly, this move away from attributing intentions and cognitive capacities to a machine is a very significant departure from AI and cognitive science in general (which, arguably, were engendered to introduce this very possibility in contradiction to the tenets of the behaviorism that was hegemonic in Anglo-American social sciences during the first half of the twentieth century; see Gardner, 1986; Turkle, 1987; Agre, 1997).
However, I now realize that this is only one interpretation of Agre's proposal for critical technical practices. Nevertheless, I think this is a good way to understand critical technical practices: it entails having one foot in an AI Lab and the other in science studies or cultural studies. That is, the other foot needs to be in an area that can give one perspective on the limits of what one is doing back at the lab. Perhaps, following Noah (Wardrip-Fruin and Moss, 2002), one needs three feet to participate in a critical technical practice. That may be the case, but my point is simply that those of us who right now call what we do a critical technical practice have all, at one time or another, found our way out of an AI Lab (or a close variant thereof) via a critical reflection on our lab practice (e.g., Sengers, 2001; Wardrip-Fruin and Moss, 2002).
But just finding the way out of an AI Lab seems like too insubstantial a credential to amount to much. Some, like Agre, have not only found the exit but also created a permanent one, insofar as their critical work resulted in lasting changes to the field of AI. Agre's work on situated planning, in conjunction with that of his colleagues, has opened the door to an entirely new kind of work in AI. This, on the other hand, seems like too steep a price to pay to claim the title of critical technical practitioner. In fact, it opens other issues too: is the practice still critical if it can be so easily digested by the lab it emerged from? My own initial work, although quite modest by comparison, raises the same question: just how "critical" is a piece of work that simply moves one from one part of the computer science department (the AI Lab) down the hall to another part of the CS department (compilers and programming languages)?
My understanding of a critical technical practice is now less dependent upon my understanding of AI. I think a critical technical practice is one of working on issues and artifacts considered to be technical (e.g., computer programs) while also being aware of the paradigm (in Thomas Kuhn's sense; Kuhn, 1972) or language game (in Ludwig Wittgenstein's sense; Wittgenstein, 1958) that bounds one's thinking about the issues and artifacts of the work. This definition implies that one also knows about alternatives to the paradigm or language game within which one is working.
This sort of practice is a bit dangerous, since it licenses one to read and study practically anything to get a new perspective on one's topic of research. In fact, many visitors to my office are puzzled by the wild array of books on my shelves and my desk: books of sociology, anthropology, media studies, computer science, artificial intelligence, linguistics, philosophy, art, art history, design, architecture, and so forth. This eclectic reading is a symptom of the critical technical practitioner who - for whatever reason - has decided that a technical issue or artifact can be built and/or studied using an entirely different body of knowledge, an entirely different paradigm or discourse, from the one usually considered native to the subject.
For my particular work, it was important to understand that one could design software using knowledge and criteria usually considered alien to the enterprise of computer science. So, while psychology has often been used to evaluate user interfaces and other types of software, one might also imagine using sociology to evaluate the same software. Indeed, I think this is not a bad characterization of a lot of very interesting work in computer-supported cooperative work (CSCW). As more and more people from different backgrounds begin to experiment with the design of software, it becomes more and more likely that new genres of software will be developed. These new genres might have theoretical "backbones" from far-flung areas: not just sociology, but biology, literature, film, art, etc. These areas will be the generators of new paradigms of software design.
The Conversation Map was my dissertation work, done at the MIT Media Lab. It was an attempt to render in software a variety of views about conversation, discourse, and dialog; these views included ideas from sociology, linguistics, philosophy, and art. My approach to the Conversation Map project was different from the way I approached the ProgramCritic because I started with the idea that it was acceptable to mix paradigms. However, such mixtures make it difficult to decide how the resulting work should be evaluated.
One would like the real measure of success for a critical technical practice to be, as Sengers suggests, a larger cultural intervention. For instance, if my work at and after the Yale AI Lab had really been critical, one might imagine that it would have transformed education, or at least pedagogy, in some significant manner. But, again, this seems an unreasonably high expectation. It would be akin to asking all physicists to discover a new law of nature before they could properly call themselves physicists.
Instead, it is probably more reasonable to evaluate any given production of a critical technical practice from multiple perspectives: one evaluation for each theoretical paradigm invoked in its construction. So, I think Ross's implicit question is a perfectly reasonable one to expect answered in the evaluation of the Conversation Map -- "So how did the discussants react or change when they used the Conversation Map system?" But this question has at least four variants: one for sociology, one for linguistics, one for philosophy, and one for art. Obviously these will entail entirely different criteria for success. The Conversation Map might be said to have succeeded on an artistic level but failed as an entrant in the world of sociology, and so on. I think, right now, I have a philosophical evaluation of the Conversation Map, but I have only begun to answer the other variants of the question.
Ross's question is also a variation on the meta-question that applies to all of us trying to perform a critical technical practice. Namely, how do researchers react or change when they have a representation of the larger discourse within which their work is embedded?