Scott Rettberg, director of the Center for Digital Narrative (CDN), is joined by Lai-Tze Fan to discuss gendered AI assistants and the invisible labor involved in editorial work.
SR: Welcome to Off Center, the podcast about digital narrative and algorithmic narrativity. My name is Scott Rettberg, and I’m the Director of the Center for Digital Narrative at the University of Bergen. In this podcast, I’ll have conversations with the researchers at the center, as well as other experts in the field, to discuss topics revolving around digital storytelling and its impact on contemporary culture. Today, I’m here with Lai-Tze Fan, and we’ll be talking about gendered voice assistants, as well as Lai-Tze’s experience as a professor, editor, critic, and creator in the field of digital culture. Welcome.
LF: Scott, thanks for having me.
SR: Lai-Tze, you’re an Assistant Professor of Technology and Social Change at the University of Waterloo in Canada. And you’ve recently been appointed as Professor II, a 20% position, at the Center for Digital Narrative here at the University of Bergen. I see you’ve also worked at Lingnan University in Hong Kong as an Assistant Professor of Digital Culture and Narrative. You keep collecting cool titles. And you’re also a researcher in the AI Institute at the University of Waterloo. Can you say just a little bit about your journey as a researcher that brought you to this point? And also, what is an Assistant Professor of Technology and Social Change?
LF: That’s a good question. And probably the trajectory that I’ll describe leads into that final answer about what the position entails. I’ve had a very interdisciplinary path that started with English literature; a lot of my youth was spent writing and reading and surrounding myself with different forms of storytelling. At some point, that became more exploratory. I understood that the book was just one of many, many forms of media. So, I started to think about what storytelling looks like in film and games, and eventually in digital and virtual forms. As I moved on, I was essentially asking about the future of literature. I was very lucky to work with wonderful scholars who were part of the elit community, which opened some doors for me, but also showed me models of scholarship that are not necessarily traditional but can be super cool, and encouraged me to try new things out. Eventually, especially through a postdoc that I did at Concordia University in Montreal, which is arguably the birthplace of what we call research-creation, I started to ask questions about design and all of the media forms and technologies I was looking at. In designing a few things for myself, I realized that when it comes to end users, we often already have somebody in mind, and groups are being left out, or products are not necessarily representing everyone. I understand that certain products and objects have certain functions, but asking about the benefits and risks of that is of increasing interest to me, especially in terms of the user interface and accessibility experience. And when this position in technology and social change came along, to address the second part of your question, I thought it was a really great and challenging next direction that allowed me to ask larger questions about policy, legislation, what’s going on behind the scenes, even beyond the back end and hardware.
SR: Technology at the level of the companies that are making the software, maybe at a governmental level as well?
LF: Yes, definitely. And understanding that relationship amongst artists and scholars, but also the government and industry. How does that all work together such that we end up with the kinds of technologies that we have, and I think that is a question of social change, hopefully in a good direction.
SR: Or good and bad at the same time.
LF: And understanding that it will have both facets and all those different colors and shades is part of what I’m interested in understanding more.
SR: Maybe let's go further back. I’m Gen X, and I guess you’re a millennial?
LF: Yes, you can tell from the all-black millennial uniform.
SR: I’m curious about how you related to technology growing up. I don’t know if you remember a time before the web, for example, and how you personally experienced those social changes that occurred from your youth to the present?
LF: That’s a good question. And I do remember a time before the web. I didn’t really have it until I was, I think, in my teens, and did not have a computer at home until then, not the internet either. For instance, I’m still enough in that generation that I had typing classes. Keyboard typing classes, not typewriter classes.
SR: We used an actual typewriter.
LF: Yeah, so that was still something that needed to be taught at school, because it wasn’t so ubiquitous as what we have now. And I guess my typing form is pretty good because it was formalized, the same also with my grammar. Everything’s been kind of formalized along the way. But in terms of technologies and the things that I’ve seen, I was always very attached to print, not so much to electronic or digital technologies, especially ones for communication. The interesting thing is that my friends and I were all on an early form of social media on the Internet in the late 90s and early 2000s, when we were using GeoCities and something called AsianAvenue. But it had the same model in that it was not automated; there were no templates.
SR: You had a homepage.
LF: Yeah, we had homepages. And that’s how I learned how to start coding, because that was what my friends and I did for fun, to interact with each other online. But that was HTML 4.
SR: I remember when it was HTML, just HTML.
LF: Before there was anything else that was supposed to come after that. So, before that, it was an outlet for creativity. And it was a way to connect with others that created a lot of social change. We could connect at home. We were using ICQ and MSN and all of that.
SR: Texting on phones came in fairly early. DMing or IMing, right?
LF: Yeah. So, to be unplugged from the computer, though, I think was not until I was in university. In order to talk to friends, you had to say things like "AFK, I have to go to the store. I’ll talk to you when I’m home." You couldn’t reach out to others, and your parents couldn’t find you because you didn’t necessarily have a cell phone.
SR: You’ve done a lot of work now in the field of electronic literature, including recently winning the 2022 N. Katherine Hayles Award for Criticism for editing the first issue of The Digital Review on critical making and critical design. What is The Digital Review, and how is it distinct from other kinds of journals? And also, what is critical making?
LF: Thank you for asking that. Because I think a lot of people will associate the editorial work I’ve been doing with ebr, electronic book review, which has been edited by Joseph Tabbi for a long time. TDR came, not necessarily out of the pandemic, but it was manifested in 2020. So, the timing seems as if it was a reaction, but it had already been in the pipeline, and also in the mind of our Managing Editor at ebr, Will Luers. He was interested in a space that could facilitate some of the more experimental works we were receiving as writing: essays that also include code. He edited issue zero of TDR, which is on “Essayism,” as a response to the fact that writing on the internet, writing in digital forms, doesn’t necessarily look like traditional forms of publishing and writing. We needed a space to be able to support that. As we moved on, I was very lucky to edit the next issue. And I received federal funding for it, which is fantastic, because I could build a big team and give students all these opportunities to learn about digital publishing.
But I wanted to work on critical making and critical design because what was really cool, especially with that as a platform for talking about elit, is that I understood there was this connection and relationship between the works being produced creatively and the scholars. And the artists, of course. The scholars know the artists; they write about them. We see that dynamic, for instance, at the annual ELO conference and meeting. But it’s not always represented in publishing, because it’s hard to include, for instance, the exhibition, the installation, or the work as it lives digitally alongside the scholarship about it; there’s a sort of dissonance, in that they’re not happening at the same time. TDR is actually the right platform to bring those two groups together. And often they’re the same people. People like Nick Montfort, like Caitlin Fisher, who are writing and making and are in conversation with each other. But more than just having that issue represent the fact that those two things are almost two sides of the same coin, I wanted to show the development and design process, that there’s a labor of love behind it. That it is a process, that things don’t work sometimes. And that there is an unseen part of all of that making, which is a form of invisible labor, including in collaborative work. It’s not just a final copy. What about the drafts? What about prototyping?
SR: Yeah, there’s a lot of labor that goes into that. And I think one of the interesting things is that from the beginnings of internet scholarship, there’s always been this question of why isn’t scholarship about digital media in digital media? There were some attempts, like Kairos, a journal that to a certain degree embraced that for a long time. But it’s something that didn’t really take off in the way that I think a lot of people expected it to. There’s a sort of built-in disciplinary bias, I guess you’d say, that has continued on. And I think we all have reason to be proud of what the ELO conference is, that there has always been this crossover between art practice and digital literary practice and criticism. The critics know each other; the artists know each other. And oftentimes, they’re in the same body.
But maybe we will come back to that. I did want to continue on this theme of editorial work, because I know you have done a lot of work with ebr over the years. One of the things I’ve talked about with ebr’s editor, Joe Tabbi, is that editorial work often doesn’t get enough credit. Younger academics are so driven by publishing metrics now, to get a job and get tenure, that some of that work of editing falls by the wayside. Joe calls this the death of the gift economy in academia. And I know that for me, coming up, that kind of editorial work and those experiences were formative to my career. So, what’s your experience been in this regard? And what have you gotten back out of all this editorial work that you put in?
LF: I think it’s a difficult reality that there’s a desire to be involved in the community, but not always an understanding that some of that work is done as an act of interest and passion, and not necessarily something that will translate very well. How do you, for instance, measure that editorial time as a metric that one’s institution could judge in a way that they deem to be a contribution? And how does an emerging scholar represent that in a CV when it’s just one line: editor. But that can mean hours and hours every month. That can mean...
SR: A lot of invisible labor, as you said before.
LF: Yeah, a lot of that. I’m interested in how many people and what types of people are involved in that invisible labor. Not to sound too corny about it, but for me it really was just out of interest and passion. I really wanted to be more involved in the community. I acknowledge, though, that I had the luxury of doing so. I understood this was something that doesn’t necessarily translate well to metrics, but I still wanted to do it because I wanted to learn. But sometimes people cannot afford that time, especially if they’re being overworked or if they’re in a contract position.
SR: A 3/3 position or a 4/4 position.
LF: Yeah. And I’m lucky enough that I didn’t have those kinds of positions; there was always an element of research. In maybe all the jobs I’ve applied for, there was an interweaving of community-oriented, public-facing dissemination and outreach. Such that I have always been able to, I won’t even say justify, but just weave that in as part of my story. It’s a necessary part, that I was not alone in the office all the time, and I have never wanted to be.
SR: I think that keeps us sane, to have people to talk to; you get introduced to the field that way. One of the things I’d say is a distinguishing feature of the field of electronic literature, as I mentioned, is that many of us working with elit cross between what you might call more traditional academic study and creative practice, which we think of both as experiments that might feed our critical work and, hopefully, as literary art. So there’s a lot of hat switching and code switching involved. What do you, and I know you’ve done a lot of both, think are the challenges and benefits of working simultaneously in the critical and creative camps?
LF: I would say the main challenge of doing both critical and creative work is that it’s not always legible as a focused approach. If the direct relationship between those two things, the ways they can foster and supplement each other, is not always legible, then it can be hard to make a case for why it’s a strengthening part of the process. But at the same time, the benefit has always been, for me, a way to reflect on how theory becomes practice, how I can come at a problem from a creative perspective. That doesn’t necessarily always mean that I’m making art; it can also just be a creative solution to a research question. And that can look like a form of publishing I didn’t think of, including creating new venues and platforms with which to publish, but then also understanding that probably many people are asked to do that now: more work that represents theory and practice, especially in the humanities.
SR: Yeah. I mean, this was a big hard sell for me as a teacher, to say that, in order to learn about digital culture and think about it critically, you need to engage with it. And one of the ways you can engage with it and understand it best, I think, is through making things and having fun making things. But also, that’s how things reveal themselves. Writing with ChatGPT right now teaches you a great deal about how large language models operate. Certainly more than just reading all the pundits responding to it as things move on.
I also like the fact that a lot of your creative work is collaborative, including Decameron 2.0, which was produced with a large group of people, with different nodes in it, and just received an honorable mention for the 2023 Robert Coover Award for a Work of Electronic Literature. Do you think there’s something about the digital environment that lends itself particularly well to creative collaboration?
LF: Yes and no. So, for instance, being physically here in Bergen has been wonderful for facilitating collaboration, such that there’s this embodied experience, and you can really focus and get a lot of work done with a group, and the conversation doesn’t necessarily have to stop, whereas if we were doing this digitally, at some point I might have to say, sorry, I have to go wash the dishes and continue on with my life. But at the same time, with this specific example of Decameron 2.0, it came out of a writing group of women who all came together because of the pandemic, because we could not get together in person. Many of those women I’d never met before in my life, and they live in different parts of Canada. But because the pandemic isolated us physically, the virtual became a place to gather and a way to connect. That would not have been possible if we had not gone digital, not gone virtual. There was no social etiquette for how to behave on Zoom at the time, but we found a way to build community, and in so doing we created works. It would be like quiet writing time, and it lent itself well to, for instance, when I started teaching virtually. I just don’t think that would have been possible without a virtual environment.
SR: That’s really interesting. I did a documentary with Anna Nacher, Søren Pold, and Ashleigh Steele about COVID elit, about things that people, authors and digital artists, were making in response to the pandemic. There was this moment where, at least for me, I sort of needed that contact not only with friends and family, but also with colleagues. And there was this moment of, let’s do something together, maybe because we were driven away from human contact, embodied contact, which I think we all missed. Maybe when we look back at what pearls we can take out of the experience of the pandemic, in addition to the sourdough bread and all that, it was that experience. And hopefully, we won’t forget it.
LF: Yes, it will be interesting to see how forms of collaboration and new forms of scholarship emerge and respond to that. It’s normalized virtual or hybrid experiences. I’m glad for that, in some ways, but it also allows us to imagine that maybe writing can happen in different tools, that we’re all connected through this Google Doc. But it could also be, here’s a virtual space, we’re going to build something in Unity. Let’s all have fun and play in here together with our different avatars. I’m sure you saw a lot of that with the creative responses in the exhibition on COVID elit. But for a brief while, especially at the beginning of the pandemic, I felt more connected to people I hadn’t seen in a long time. And it felt like a very freeing time to just say, what have I not been working on for a while that I can kind of pick up on? Again, I understand that was a brief moment.
SR: You didn’t have to commute; it was sort of commuting to your pajamas and to the screen. Although at this point, I’m now like many people, I recoil a little bit from the all-virtual, like online conferences, for example. When I go to those, I just so miss the after the session when you’re in the hallway, or when you go have a coffee or a beer, these things don’t happen on Zoom.
LF: Do you remember, especially after a particularly engaged and intense conversation at a conference panel or roundtable, or after you gave a keynote, ending the call and just... the vacuum?
SR: Yeah.
LF: It was so awful.
SR: And I still get asked to do these. And I say, oh, it’s been nice to be in India (virtually), but I don’t know, I don’t feel like I connected with people in the same way as if I was actually in India. But anyway, there are good and bad things that have come out of that. I just read your paper a couple of nights ago that was just published in Digital Humanities Quarterly, “Reverse Engineering the Gendered Design of Amazon’s Alexa: Methods in Testing Closed-Source Code in Grey and Black Box Systems.” There’s a really interesting mix of theory and experimental research in this paper, looking at code and biases embedded in voice assistants, and at some independently programmed “skills,” as they call them in Alexa. But the device is technically called something else, right?
LF: So, the assistant is called Alexa, but it comes in different hardware. I think one of them is called Nest or something. Or Echo.
SR: It’s the Echo, yeah. But anyway, these skills are made for them. So, tell us about your work there.
LF: I’m going to try to see if I can connect this. I might not be able to do it elegantly, but I’ll try my best. It’s a continuation of some things I’ve been talking about with Joe Tabbi in relation to our work for the Center for Digital Narrative. The work there and that article are part of a monograph-length project I’m developing called Unseen Hands, which is about invisible labor and the ways it’s been gendered. Primarily, that a lot of technologies we think of as gendered are already gendered by design, even before they get into the hands of users, in the more classic capital circuit of production, distribution, and consumption. I’m asking about pre-pre-production: who is designing these things? What kinds of decisions are they making, such that they cause a chain reaction that not only validates, but maybe even strengthens, certain social norms and stereotypes? And the methods part of that paper came about by accident, because Amazon, and more so Apple, actually, are fairly closed source about a lot of their systems and what they allow people to come in with. So, I just analyzed the responses of Alexa herself, and I choose to use that pronoun. It was necessary that I did a little bit of digging into what the parameters were. If I cannot know what’s in the black box, I can at least sort of feel out the walls.
SR: I think people sometimes overplay the black box.
LF: That’s why I describe it as potentially a grey box, because I can still do a little bit of triangulation. Oh, that call went out this far, which means that potentially this means that, and so there were a lot of triangulations as a method there. But primarily, even though it looks like a paper that’s just on gendered design, because it’s part of a book, there are larger questions here about systems and systems literacy. In particular, the ways in which information and structures of information represent things in ways that are streamlining, scaffolding, almost controlling our understanding of information, and sometimes our behaviors. I know that someone like Safiya Noble has done a lot of this work in terms of understanding how the Google search engine streamlines, tags, and classifies information, such that some of the older search results were clearly problematic and biased. They were forming an idea and a representation of the world that was being presented as if it were truth or knowledge, instead of representation. And that’s where things start to get difficult: when we automate that process, we stop asking and remembering that there are people making those decisions, and we just say, it’s math, how can it be biased? But it’s not just math. People have to make decisions about what to privilege and how things are weighted, including in ChatGPT, as you mentioned. So, this is the thing that I’m interested in, and thank you for asking me about my latest research and connecting it with the work that I’m doing. For instance, thinking about this new digital narratives database, it’s also about relaying information such that it’s not forcing people into conceptions of information that are not faithful.
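That “feeling out the walls” approach can be pictured with a toy probe harness. To be clear, this is not the paper’s actual protocol, just a minimal sketch under invented assumptions: `query_assistant` is a hypothetical stand-in for a real closed-source device or API, and its scripted replies are made up. The underlying idea is simply that when many differently phrased prompts collapse onto one identical response, that response is likely a preprogrammed template rather than generated output.

```python
# A toy "grey box" probe harness: send systematically varied prompts to a
# closed-source assistant and bucket the replies. Identical canned responses
# across many phrasings suggest a preprogrammed template, one "wall" of the
# grey box. `query_assistant` and its scripted replies are invented.
from collections import defaultdict

def query_assistant(prompt: str) -> str:
    """Hypothetical closed-source assistant: we only observe input/output."""
    scripted = {
        "make me a sandwich": "I'm not able to do that.",
        "what's the weather": "It's 12 degrees and cloudy.",
    }
    return scripted.get(prompt.lower().rstrip("?!."), "Sorry, I don't know that one.")

def triangulate(prompts: list[str]) -> dict[str, list[str]]:
    """Group probe prompts by the response they trigger."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for p in prompts:
        buckets[query_assistant(p)].append(p)
    return dict(buckets)

probes = [
    "Make me a sandwich!",
    "make me a sandwich",
    "What's the weather?",
    "Who wrote Hamlet?",
    "Define entropy.",
]
walls = triangulate(probes)
for response, triggers in walls.items():
    print(f"{len(triggers)} prompt(s) -> {response!r}")
```

Against a real device the probes would be spoken or sent through a developer interface, and the clustering would be fuzzier, but the triangulation logic, inferring parameters from response patterns alone, is the same.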
And maybe a good example of that is: if I just had a flat list of contributions and critical works, I can’t, and wouldn’t want to, give a conference paper and a monograph equal weight, even if they’re both critical works. So, I have to be able to represent that, just as I would in a CV. Those are different things, and they take different amounts of time.
SR: And they are marked like that. You can see that it’s a monograph.
LF: Yes, they are. And then so thinking about the taxonomy of something like that…
SR: How to display it in a way that it’s interpretable?
LF: Yes. And, to understand that this is a field that doesn’t necessarily privilege critical works first. So, for someone like Jason Nelson or Caitlin Fisher, who’ve built a lot of their careers around creativity, creation, and making, how do we convey that in the way they contribute to a field like elit, to digital narrative, the creative works should be prioritized when someone’s scrolling through an entry about their work? And maybe, to bring in another example, for someone like Dene Grigar, how do we represent archival work, when it’s not your own output, but the work you do to aggregate, preserve, and disseminate the works of others? That work is not necessarily legible in a lot of traditional forms of academia and academic assessment. So, making sure that, as a field, and the Center for Digital Narrative is helping to communicate this, the practices are different here, is, I think, something that Joe and I are both very interested in. It’s something I’ll continue to think about, because it’s very much related to people taking for granted that anything they see online, even just something like a taxonomy, is naturalized or centralized. It’s not.
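One way to picture that taxonomy problem is as a small data model in which display order is a per-contributor choice rather than a fixed ranking. Everything here, the type names, the weights, the titles, is hypothetical; this is only a sketch of the design question, not the actual database schema.

```python
# A hypothetical sketch of the taxonomy question above: one database entry,
# many kinds of works, and a display order that is a per-contributor choice
# rather than a single naturalized ranking. All names and weights invented.
from dataclasses import dataclass

@dataclass
class Work:
    title: str
    kind: str  # e.g. "monograph", "creative_work", "archive", "conference_paper"

# Default weights encode that a monograph and a conference paper are not
# interchangeable, even though both are critical works.
DEFAULT_WEIGHT = {"monograph": 0, "creative_work": 1, "archive": 2, "conference_paper": 3}

def ordered_entry(works: list[Work], creative_first: bool = False) -> list[str]:
    """Sort works for display; practice-based contributors can opt to
    foreground creative output instead of critical output."""
    weight = dict(DEFAULT_WEIGHT)
    if creative_first:
        weight["creative_work"] = -1  # creative works float to the top
    return [w.title for w in sorted(works, key=lambda w: weight[w.kind])]

entry = [
    Work("A Conference Paper", "conference_paper"),
    Work("A Monograph", "monograph"),
    Work("A Hypertext Poem", "creative_work"),
]
print(ordered_entry(entry))                       # critical-first view
print(ordered_entry(entry, creative_first=True))  # practice-first view
```

The point is only that the ordering is a modeled, contestable decision; a real system would store such preferences per entry and per contributor rather than hard-coding one ranking.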
SR: Yeah, and any direction you take it in, there’s always another side to it. So, for example, in the ELMCIP Electronic Literature Knowledge Base, gender originally wasn’t marked as a field. Then it was, after we had a workshop at the ELO conference where people said, “I’d really like to be able to study representation over the course of the history.” And then there were eventually some complaints about that. For example, a trans person had changed her gender, that wasn’t reflected in the database, and she really expressed how she felt harmed by that. So we debated it and decided to go back to having no gender field, because this had turned into an act of marking that should be up to the individual authors to determine. At the same time, they didn’t want to report it themselves. So, it gets to be really complex. Take your example: if you put creative works first, what does that say to the people who are critics and only critics? These values are quite complex, and they make for interesting debates.
We have to wrap up pretty soon, but I did want to get back to the explosion and the cultural effects of large language model-based AI systems like ChatGPT, and the connection to the gendered labor that you write about in your work on voice assistants, and how that’s reflected in and changed by these newer types of AI. After I read your paper, I thought of this classic example of sexism, the phrase “make me a sandwich,” an implicit command and a very sexist phrase, and you give many examples of how voice assistants reflected that. Now I plug that into ChatGPT, and maybe there are still hints of that, but it’s changed. The other thing is that I asked ChatGPT if there are biases embedded in that phrase. And actually, it was really good at giving a list of biases represented by that phrase. So this is a different environment, where there is no expert system guiding us to particular responses. Instead, there are patterns matched in this massive corpus of human language, which also has its own biases in a way.
LF: Yeah, and that has a lot to do with how those LLMs are trained. I think they try to be a little bit more idiosyncratic in the ways they represent information, even though, arguably, one of the ways we can tell something is written by an LLM is that it seems formulaic, though not necessarily more so than something with a preprogrammed script, like an AI assistant. And the distinction there, I think, and a lot of people have asked me about large language models since ChatGPT came out on November 30, 2022, the main distinction is that we expect a little bit more from these forms of AI. I’m sure there were scared responses to AI assistants as well. But the distinction has been about the potential for intelligence. I don’t think we’re quite there, despite what some people think about their potential for sentience. I don’t know that they pass a Turing Test, but I do think there are different tests we would subject ChatGPT to, as opposed to something like Amazon’s Alexa, precisely because we have different expectations of what we need from them, or what we would even use them for. I use ChatGPT to assist me in representations of information. It’s just a more interactive Wikipedia.
SR: Alexa told me the weather today, or it could order me a Subway. But yeah, it’s a fascinating time. And I’m really looking forward to your work with the Center for Digital Narrative, where we’ll see you for about 20% of your time. Or I guess maybe it works out to like 16 or 15%, or something like that, since you already work 100% and live in Canada. But thanks very much for coming on the podcast, and I really look forward to seeing the research that you do, both with the database, and hopefully also in interaction with the work that we’re doing on Extending Digital Narrative, AI, and conversational interfaces.
LF: Thank you so much for having me.
Listen to the full episode of Off Center.
References
America Online. 1996. “ICQ.” https://icq.com/.
Decameron Collective. 2022. “Decameron 2.0.” https://www.decameroncollective.com/decameron2-0.
Electronic Book Review. n.d. “Electronic Book Review.” https://electronicbookreview.com/.
ELMCIP. n.d. “Electronic Literature Knowledge Base.” https://elmcip.net/.
Fan, Lai-Tze. 2023. “Reverse Engineering the Gendered Design of Amazon’s Alexa: Methods in Testing Closed-Source Code in Grey and Black Box Systems.” Digital Humanities Quarterly 17 (2). https://orcid.org/0000-0001-7271-2160.
Kairos. n.d. “Kairos.” https://kairos.technorhetoric.net/about.html.
Microsoft. 1995. “MSN.” https://www.msn.com/.
OpenAI. 2023. ChatGPT [Large language model]. https://chat.openai.com/chat.
The Digital Review. 2022. “The Digital Review, issue 01.” https://thedigitalreview.com/.
Unity Technologies. 2005. “Unity.” https://unity.com/.
Urban One. 1997. “AsianAvenue.” (Currently redirects to https://blackplanet.com/.)
This research is partially supported by the Research Council of Norway Centres of Excellence program, project number 332643, Center for Digital Narrative and project number 335129, Extending Digital Narrative.