Lucy Suchman responds (excerpt)
Phoebe Sengers' paper reenacts the very experience that it describes. Like the psychiatrist facing the schizophrenic patient, I as reader find that the multiple narratives the paper presents pose increasingly insurmountable problems of integration. Two larger fragments -- a critical project and the account of an agent system -- sit side by side in uneasy (dis)association. I find myself following with growing interest the unfolding narrative of schizophrenia and its implications for AI, when I'm suddenly thrown without warning, like Alice through the looking-glass, into a world of agent systems-building, motivated by that world's characters, problems, projects, and prospects.
The latter narrative becomes increasingly incomprehensible until I realize that this is not a story that can be resolved into any single, familiar frame. What if, I find myself asking at the end, Phoebe Sengers' story were renarrated not as the design of an agent system, but along the lines that she herself recommends at the close of her paper; that is, as the design of a new genre of computationally dynamic, communicative animation?...
And what, to return to where we started, might the prospects be for working together the insights of narrative psychology/antipsychiatry and AI? A first step, I believe, is to recognize the profound difference between a textual (or even dynamically graphical) narrative designed to support comprehension, and the coproduction of mutual intelligibility. As Sengers points out, for most of us there is no invisible hand stipulating our "intentions" or "reasons" for action, setting the parameters of behavioral "simplicity," or assessing whether our intentions have been "properly communicated." Nor can what has been communicated be definitively tracked and managed. Rather, as the case of the "schizophrenic" so clearly shows, our stories are always provisional and ultimately extremely fragile. Life is only metaphorically a story.