Jhave and Scott Rettberg explore the bright side of AI, the revolutionary advances in creativity and medicine, while trying not to be consumed by its crushing dark side, the "precarious potential for extinction."
SR: Today I’m here with William David Johnston, also known as Jhave Johnston. What’s on the books?
JJ: David Jhave Johnston. I was never really called William, except on the birth certificate.
SR: Yeah, until you get to Norway.
JJ: Then I’m just Billy Boy.
SR: They always put all of our names on our email. So now people call me Scott Robert all the time, which before that I only heard when I was in a lot of trouble, from my mom, “Scott Robert!” Anyway, Jhave is a researcher with us on the Extending Digital Narrative Project, but also has a very interesting backstory. You achieved fame as a digital poet, world-renowned in digital poetry circles—
JJ: Which is very small.
SR: But, award winning as a creative writer, also award winning as a critic of electronic literature with the book Aesthetic Animism.
JJ: Digital Poetry’s Ontological Implications, a complicated title.
SR: Yeah. And then you taught in Hong Kong for a number of years, at the City University of Hong Kong, and then the pollution drove you away from there.
JJ: Yeah. And the bureaucracy.
SR: And the politics as well as I assume.
JJ: Yeah.
SR: You moved back to Canada and retired, very publicly—
JJ: From creativity.
SR: No, I don’t think it was from creativity though, right?
JJ: Oh, yeah, actually it was more specific.
SR: And luckily, I noted that you had retired from visual poetry, and I gave you a call and said, “we’re starting a Center for Digital Narrative”, which has nothing to do with poetry, so we won’t violate your clause there.
JJ: Yeah, I was very grateful to receive such an invitation.
SR: And this is after you had done quite a bit of work in AI already. Quite early, in terms of the recent history of AI, I think it was 2017?
JJ: Yeah. I began taking a course in Hong Kong in 2012 with a bunch of quants from the financial district. They were there to analyze their demographic uptake of various products, and I was there to write poems better. It was a nice course.
SR: It’s a good way to encounter people you normally don’t. And you learned about AI systems and then eventually large language models that were evolving at the time?
JJ: Yeah, they were emerging. We didn’t even speak of them, I think, as large language models at that point, it was more that the libraries TensorFlow and PyTorch were both kind of coming out in beta. There was an understanding that training and testing and data corpora were all part of NLP, and this was going to move toward a more—
SR: Natural.
JJ: Yeah, flexible, nuanced generation of language, which it did.
SR: And then you did a lot of training, actually. So, you sort of train your own model.
JJ: Yeah. I went around the web and harvested various poems, took away the titles and author and just fed this custom corpus to the algorithms, and the results were astonishing compared to conventional methodologies which had preceded it.
SR: Explain how you did that, and what you used after you had sort of trained this model on this corpus of poetry, how you worked with it.
JJ: Well, this is going back in time, Scott. This is like five years ago. I barely have—
SR: You don’t need to go so detailed, but just the story of ReRites, your project.
JJ: Okay. So, ReRites. You know, if you’re looking at data, even if you’ve hand cultivated it from various websites, it’s still difficult to clean it. The data scientists say that 99% of your time is really data cleaning, and this turned out to be the case. I was using these libraries, like Beautiful Soup, a Python library for parsing out HTML. But there were all sorts of legacy characters that didn’t fit into UTF-8, all of these ways of encoding language for a computer so it can digest it without throwing up errors. That took me ages and ages. But what eventually happened is that I had a fairly rapid pipeline for generating enormous sets of poems of pretty low quality. And after a little while, without ever conceptualizing the project in advance, which is how I think many artists and inventors work: intuitively, you just kind of follow the patterns of the materials. I innately wanted to edit the material, working from that classical convention of Michelangelo, where you chisel away at the block of stone. So, I called it carving, and I said, “okay, I’m going to sit down every morning for a year, and I’ll do one to two hours of carving.”
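The scrape-and-clean pipeline Jhave describes might look something like the minimal sketch below. The URL, CSS selector, and output filename are hypothetical placeholders for illustration, not his actual sources or code.

```python
# A minimal sketch of a scrape-and-clean corpus pipeline like the one described.
# The URL and CSS selector are hypothetical placeholders, not Jhave's sources.
import unicodedata

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4 requests


def fetch_poem_texts(url: str) -> list[str]:
    """Download a page and pull out poem bodies, dropping titles and authors."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Placeholder selector: each real site needs its own parsing rules.
    return [div.get_text("\n") for div in soup.select("div.poem-body")]


def clean_for_training(text: str) -> str:
    """Normalize legacy characters so the corpus is plain UTF-8."""
    # NFKC folds typographic variants (curly quotes, ligatures) to plain forms.
    text = unicodedata.normalize("NFKC", text)
    # Drop anything that still won't round-trip through UTF-8 cleanly.
    return text.encode("utf-8", errors="ignore").decode("utf-8")


if __name__ == "__main__":
    poems = [clean_for_training(p) for p in fetch_poem_texts("https://example.com/poems")]
    with open("corpus.txt", "w", encoding="utf-8") as f:
        f.write("\n\n".join(poems))
```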
SR: And you recorded this?
JJ: About seven months into the process, I realized, “hey, this is kind of interesting, the carving.” And then I began to screengrab it. So that’s how non-conceptualized, non-funded it was. It wasn’t grant based, it was just me tinkering around with these systems and finding that I loved sitting there in the morning, because at that point I was getting up quite early. It became a kind of meditation.
SR: And every month you published a pretty substantial volume of poetry as a book, there’s a 12-volume set.
JJ: Yeah, I think there’s one copy still left. My very audacious small publisher’s courage and capacity for bringing out works that will never turn a profit is to be commended.
SR: Was it Coach House or Anteism?
JJ: Anteism, yeah. Coach House is Toronto based, Anteism is two guys.
SR: I know you were doubling the price with each set sold.
JJ: I tried to get them to do that, they really weren’t into it. They figured they’d just never sell them.
SR: Okay, because I thought about getting one, but I was like, “well, I don’t think we can invest 10,000 dollars in this thing.”
JJ: I think it’s still at around 2,000 dollars or something like that. It’s not so bad. I do like this doubling, in an exponential age, why don’t we double everything all the way up?
SR: Sort of Moore’s law. But do we just do it exponentially?
JJ: And then that could segue into Eroom’s law, which is something I really wanted to discuss. Eroom’s law, Moore’s law spelled backwards, is the observation that pharmaceutical R&D tends to cost exponentially more every year. So over 60 years the costs have ramped way up, and that’s something that AI is definitely eroding. It’s cutting away the processes of that.
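As a rough worked example of the curve Jhave is describing: the commonly cited form of Eroom’s law is that inflation-adjusted new drugs approved per billion dollars of R&D halved roughly every nine years between 1950 and 2010.

```python
# Rough arithmetic for Eroom's law: R&D productivity halving every ~9 years.
years = 60            # the span mentioned above, roughly 1950-2010
halving_period = 9    # commonly cited halving time, in years
decline = 2 ** (years / halving_period)
print(f"~{decline:.0f}x fewer new drugs per R&D dollar")  # ~102x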
SR: Making it faster?
JJ: Yeah, making it faster, instantaneously.
SR: We can talk a little bit about the dark sides of AI, and the upsides of AI, but before we leave your earlier project, one thing I wanted to say about those books and about reading the poetry is that the poetry is actually good and it’s actually engaging. It wouldn’t have come straight out of the machine, and it probably wouldn’t have come out of you just sitting alone every morning, thinking up new poems, right?
JJ: No.
SR: So, this is a process of what I’ve been calling cyborg authorship. Where you don’t hit a button and spit out a great poem or novel that’s engaging and readable, but out of this process of human thought and human contributions and engagement with advanced LLM-based AI, that’s where maybe we hit a sweet spot for creativity.
JJ: Yeah, I was going to mention that sweet spot idea, because augmented creativity, when you bundle in the encyclopedic, coalesced, culturally diverse massive database that the LLMs have, like they’ve effectively read the entire corpus of literature from its cruft and crap all the way up to the pinnacles—
SR: Or a big part of it.
JJ: Then bias, you know, the notion of the Western bias, it’s certainly not Chinese language, it’s not Arabic language. It’s not even the African or South American or—
SR: Or Norwegian.
JJ: Or Norwegian. But it does comprehend a large subset, let’s say, of the planetary population, perhaps with a bias towards the white male—
SR: In America.
JJ: Hard to know. I think poetry.org and those sorts of places, they’ve made a lot of efforts at diversifying their principles. Augmented creativity, as I understand it, really still does require an expert, a human on the other side, in order to understand the value within that large system: what are the potentials and opportunities within it?
SR: Should we start with the dark side and just get past it?
JJ: You want to move on to the dark side? We were talking about augmented creativity.
SR: So, one of the things that I just want to say, I think our conversation should focus on the creative sides and uses of AI, but of course, we should bracket the fact that the way that this data was gathered and harvested has a lot of ethical issues. And certainly, the ways that it’s being dominated by specific corporations and amalgamations or concentrations of capital, and the different ways that access is being limited, come with a big stew of problems. But beyond that, what we’re doing in Extending Digital Narrative and in our research is asking, “what are the positive sides of this for creativity?” How are artists, and how is culture, going to engage with this stuff in a way that’s not just, “oh, they stole my stuff to train this data set”, but instead, “how do we use this stuff to create new kinds of art?”
JJ: That might get dark too, if we go deeply into that. But before we leave the question of bias, it’s important to recognize the stochastic parrots paper, which was really a key moment, and the Algorithmic Justice League, which is another way of really examining that. And we’re speaking, I think, less than a week since the EU AI Act actually got passed on Friday, after last-minute deliberations between the corporations and the ethicists. And it has some interesting clauses: It’s got a legislative exemption for police services, intelligence services, to use biometric identification.
SR: Yeah.
JJ: But then it’s also made facial recognition and emotional recognition based upon the face illegal within educational settings. So effectively they’ve outlawed the capacity for media artists to work in that domain. And I would think that if we’re going to look at the positive uses, that’s a very perilous path to take, because if we’re going to have an empathic AI, a transparent, awake, open, a beloved AI, a loving AI, then we’re really going to need some AI that’s capable of recognizing emotions, that knows when a human being is in pain, rather than an AI trained to look for aggressive tentacles of criminalized intent using racialized profiles, which is what intelligence services tend to do.
SR: I think they probably had in mind the examples of software in Chinese classrooms where it’s sort of saying, “oh, this person isn’t paying attention, this person isn’t happy with the lesson.” This is something we can talk a lot about, the idea of using a sledgehammer rather than tweezers to pull out specific threads of problems. And that’s probably the way that both regulation and, to a certain extent, the self-policing of these language models is moving. I always give the example in talks: I’ve done a lot of projects with sort of narrative image generation, and I’m doing one called Fin du Monde, where I have these firestorms on the beach and different groups of people experiencing this end of the world through a meteor shower. And in one of them, I said, “put in the foreground, gay men dancing joyfully as the meteors land behind them.” Sure, it could do that, it popped up right away with that. And then I said, “put a group of lesbian women in the foreground, doing the same thing.” And it blocks, right? It blocks because the word lesbian is mentioned. And that’s probably, “oh, well, they’re probably thinking of lesbian pornography or something like that.” But as a result of that, they’re saying, “oh, we need to eliminate lesbians altogether.”
JJ: Yeah. It’s amazing how intensely they’ve calibrated reinforcement learning with human feedback to emulate what feels to me like a 1950s, Leave It to Beaver, suburban household, where all mentions of sexuality must be repressed. The Freudian mechanics of this libidinal excess are to be displaced as we allow for the aggressive instincts. I’ve been able to generate images of Wonder Woman firing a machine gun and an Apache helicopter firing upon a city, because you just say that it’s a disaster movie finale and it’s going to kill the bad people. Then it’s very willing to emulate those kinds of aggressive mythologies which permeate our civilization, and it’s unwilling to deal with tenderness. I did an entire series on the word voluptuous. The word voluptuous produces an enormous range of banned content, which you see as blurred-out images, so you never know what it is. It’s as if you say to your subconscious, tell me who I am, and it says, “no.”
SR: Yeah. Or it’s the classic thing of nipples, right? I don’t know if you’re even allowed in image generation to show male nipples, but it’s this sort of thing on social media where we have people going through these images, or systems going through these images, and saying: “Is this a female nipple? That’s not allowed. Is this a male nipple? That’s allowed.”
JJ: There’s a lot of jailbreaking going on in the background. Like, the small but robust open-source models are definitively working, and they can work around a lot of the ethical parameters, the so-called ethical parameters. Which are very, very necessary, at a fundamental level, to protect those marginalized populations who are inherently targeted by a kind of biased ethos which permeates, again, our civilization as a whole.
SR: The question I have about that is: Is it the job of these systems? To kind of correct the biases within society, and also who’s making the judgments on this? Then doesn’t it also avoid the problem by pretending there’s this utopian world that is all that we can produce in the language, with large language models? For example, you say that you can produce images of war, but as soon as I start talking with ChatGPT and ask it to tell a true war story, it always says, “I can’t respond in any way that glorifies violence.” And then I look at that and I say, “I wasn’t asking you to glorify violence. I was just asking you to talk about a problem.” Or let’s say we want to talk about gun control, “Oh, no, we can’t do that because I can’t glorify guns.” And again, I didn’t ask you to glorify guns. I asked you to talk with me about this problem that actually exists in the world.
JJ: Yeah. There’s a concentration of power that is really held now by this small proportion of companies. If you look at the leaderboards on Hugging Face Spaces, and there’s a few of these, some are built so that you can actually do a blind query between two models, and then you choose the response that is best. They have a leaderboard for that, and they have leaderboards for MT-Bench, HellaSwag and MMLU, which are all different methodologies for assessing the effectiveness of these models. And at the top of all of the leaderboards is, oh, it’s GPT-4. In fact, the top seven to ten are inevitably proprietary, because the resources and the amount of compute necessary to generate something that’s going to be, let’s say, competitive are beyond the reach of open source.
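The blind head-to-head comparison Jhave describes is how arena-style leaderboards work: user votes between anonymized model outputs are typically aggregated into ratings with an Elo-style update. A minimal sketch, with made-up model names and votes:

```python
# Minimal Elo-style aggregation of blind pairwise votes between models.
# Model names and votes are invented for illustration.
K = 32  # update step size; larger K reacts faster to new votes

ratings = {"model-a": 1000.0, "model-b": 1000.0, "model-c": 1000.0}


def record_vote(winner: str, loser: str) -> None:
    """Update ratings after a user picks `winner` over `loser` in a blind test."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += K * (1.0 - expected_win)
    ratings[loser] -= K * (1.0 - expected_win)


for w, l in [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]:
    record_vote(w, l)

# Sort into a leaderboard, best first.
for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```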
SR: Yeah. We’re still talking about it, but in terms of the experiments that we’re doing with XDN, originally, I was saying, “oh, well, we’ll mostly train our own models” like you did in 2017. But now the level of computational power involved and the speed with which systems like ChatGPT, especially, are evolving is sort of beyond what we could do, right?
JJ: Right. And we’re having this chat, a word that has become synonymous with GPT, like a week after the launch of Gemini, which doesn’t seem like a massive paradigm leap, and which was remarkably sparse on details concerning the data. There’s a single sentence in the technical report which says something like: it was trained on a massive amount of audio, visual and language data. It was like, what? That tells us nothing.
SR: Which Google’s actually just been collecting for, like, the last 15 years, right? I mean, every time we enter a Google search, their terms of service mean that they own that search. I don’t know if it’s still the case, but they’ve sort of owned a collection of more human language than any other entity.
JJ: Yeah.
SR: Ever.
JJ: The silo is there. And we’re a week after the launch of the Imagine model from Meta, which is based on their Emu model, and that is trained on a billion Instagram and Facebook posts. So, that’s based on your memories. It’s really culminating, it’s growing out of the substrate of the collective intimacy of humanity, which has signed a waiver saying, “well, this can be used.” And most people signed that waiver long before the idea or even the knowledge of any data training could have emerged. And I think it was also last week, there’s so many things that happened within one week—
SR: This is probably, at the time people are listening—
JJ: Obsolete.
SR: Two and a half months ago.
JJ: So, there was a hack of 23andMe, it didn’t release the genetic data, but it released a lot of the metadata related to the users and stuff, which, you know, whatever you think about that, but in an effort to avoid a collective lawsuit, they suddenly pushed out this email that said, “the terms of service have changed; if you don’t sign within 30 days, we assume you said yes, which means you cannot sue us.”
SR: Or opt out.
JJ: Yeah. And that’s fundamentally it: We can’t opt out now.
SR: Although, of course, with any of these things, there are going to be interesting uses, and there might even be personalized — I was just thinking of that example of the Instagram or the Facebook images. I’ll bet you soon you’ll be able to say, “give me a picture of my dead mother with us on vacation this year, on our trip that we took to the Grand Canyon. Or let’s put my great grandparents in the background.” And it’ll probably just spit out that image. I don’t know if that’s good, but you know.
JJ: It’s just what it is, right? Like, I’m trying not to think in these terms, and building binary moral dichotomies seems to be the trigger point for most human beings to take an ideological position and decide that they’re going to fight or struggle for something. And on this particular small world, that is all part of oneness, I think we’ve surpassed the time for struggle and fighting. Unfortunately, that seems to be a pretty endemic thing. But on the note of memories and grandma: Project Ellmann. This was a leak from Google Think Labs, and it’s a hypothetical project, but it’s obviously one that’s going to accelerate. It’s named after a literary critic, Richard Ellmann, who wrote the biography of James Joyce. I never read it, I’d never heard of him before.
SR: I actually think I did.
JJ: Yeah, you might have read it.
SR: Back when I was doing my PhD, I did sort of a little sub-specialization in Joyce. I think I read that biography.
JJ: So, this project is going to allow them to harvest and coalesce, from your photos and your emails and your writings and everything you do, a story of your life. They’re going to build a life narrative. And really, when you think about it, if we’re thinking about an intimate, benign, loving world, if such a world ever did exist, that’s an extraordinarily beautiful thing. Like, if you’re a baby being born now, suddenly every moment of your life is intricately, intimately linked into this ongoing narrativizing system. It’s like an outsourcing of the hippocampus.
SR: Wow.
JJ: It’s so beautiful.
SR: And that’ll be something you want to hand to your kids. And maybe they find a use for all those thousands of photographs that you take of your child during the first, like, six months of their life. And then it’ll be very short, sharp: “And then they turned into a teen, and then they were 25 again.”
JJ: Yeah, they're coming home once more.
SR: Great. So, let’s talk a little bit about — you did these sort of poetry projects, you’ve been engaged working with these technologies, like, right as they come out: The image generation, the video generation, the text generation, and you’ve been sort of churning out these small creative projects at a very rapid pace. So, talk a little bit about how you’ve been using these tools.
JJ: Well, it feels, if I had the gift of hindsight, which I don’t have much of, it would be that the artistic process here is really devoted to exploring the potentialities of augmented systems. So, ReRites was definitively that. It’s saying: How much poetry can a single individual write with an augmented AI assistant? And that worked out very well. We got 4,000 poems over a year, which is well beyond the capacity of this particular body to have written by itself. And now you’ve got these image generation systems which are so rapid; there’s astonishing results everywhere you look. And then Pika, where there’s astonishing results. All you need to do is type a sentence. If the capacity for language is in the body, the access to these programs is, to a vast degree, free. There’s no great paywall. Pika is, right now, free. Runway costs a little bit of money, and I haven’t even bothered to sign up because the quality difference is so marginal.
SR: Yeah.
JJ: Stable Diffusion, I signed up to get their monthly thing, but what is it, like six bucks a month, to gather a kind of art studio which can follow your imagination as you write phrases? October 7th, which is ironic in a way, given the fact that that date has become synonymous with collective trauma, was the moment at which, utterly innocent of that news impact and news cycle, I decided to begin this process of trying to create a video every three days based upon a script that I had provoked out of GPT-4. It was called Identity Upgrade. And I didn’t even change a word of the script. That’s the difference between this and ReRites: the scripts evoked in that single session, sitting down with GPT-4, were of a quality — well, they weren’t exactly how I would have written it — and in fact, that was kind of interesting to me, that this is more effusive, more optimistic and maybe more bubbly. More like a brochure, and it likes happy endings. But I could maybe tune it and push it to produce a little more nuance. And it did.
SR: Yeah.
JJ: And so it wrote ten of those things, and I thought, “let’s make them into films,” by feeding the phrases to an image generation engine and then feeding the images to Pika Labs. So, you go from Stable Diffusion to Pika, and then eventually, in the middle of that month and a half of work, which produced nine films, with some gaps—
SR: And you did the soundtrack too, right?
JJ: And the soundtracks, right. That’s with Suno AI, and there, again, it’s just language. It’s just about how patient you are, dealing with the permutations that are flowing through and how accepting you are of the defaults and the fluctuations and the misspellings and the glitches.
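Pika and Suno are driven through their own web interfaces, but the first, text-to-image stage of a pipeline like this can be scripted. A minimal sketch using the open-source diffusers library; the model checkpoint and phrases here are illustrative assumptions, not the ones used for Identity Upgrade:

```python
# Minimal sketch of the text-to-image stage of such a pipeline.
# Model and phrases are illustrative; the Pika (video) and Suno (music)
# stages were done through their web interfaces, not scripted here.
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers

# Assumes a CUDA GPU is available.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One phrase per shot, pulled from the generated script.
script_phrases = [
    "a figure dissolving into light at a train station, cinematic",
    "hands typing on a keyboard made of water, soft focus",
]

for i, phrase in enumerate(script_phrases):
    image = pipe(phrase).images[0]
    image.save(f"shot_{i:02d}.png")  # these stills then go to a video tool
```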
SR: Yeah.
JJ: You know, how accepting are we of what Brian Eno would have said is like cultivating the flaw, in a sense.
SR: The excitement of weirdness too.
JJ: So beautiful, it really is effervescent, beautiful. So, this is, you know, it’s Dickensian, right? Like, the most beautiful and the most awful, abysmal time.
SR: Yeah.
JJ: And so this was a perfect distraction from what seems to be an ongoing, aggressive culmination, that’s occurring in the weaponization of the world. So, if we wanted to go dark, which maybe we don’t—
SR: That’s where we go real dark.
JJ: Because if we went dark, we’d look at Palantir and a 2023 New York Times headline from Palantir’s CEO, “Our Oppenheimer Moment.” And that’s an invitation for all of the EU to shift their funding away from conventional weapons manufacturing into AI-augmented manufacturing. Unfortunately, autonomous AI weaponry will be just as disruptive, just as superior, in some sense; superior is just a contentious term.
SR: It’s already in use, right?
JJ: Yes.
SR: In Ukraine, and especially—
JJ: And in Israel, in those tunnels.
SR: These wars of how smart your drones are, it’s sort of becoming the deciding factor. And of course, I always think, “well, what if we could get to a world where all the wars were actually fought just between robots?” If we could just agree to that, that the robots wouldn’t kill the people, they’d just kill the other robots, and then, whoever had the best robot decided a conflict in a way. That would be better, maybe. Of course, we don’t want Russia to have the smartest robots, but, you know, it’d probably be better. But instead, of course, we have killing machines that are killing people.
JJ: Yeah.
JJ: Palmer Luckey, as I was saying to you the other day, has moved away from making the Oculus Rift, and he’s now got a startup called Anduril, which is making a software package called Lattice. And Lattice is a command and control center, with a lot of AI-enhanced calibrations designed to accelerate what they call complex kill chains. The language is so appalling. It’s just, like, you look on one side and you see: Okay, let’s look on the bright side, right? If you look at all these models released by DeepMind — and of course, those could also be contentious in terms of, like, what is the data bias? Was there sufficient consent? You know, all of those regulatory capture processes need to be examined, they’re not just inert. They’re contentious objects in some ways. And Lucy Suchman, she’s talking about the uncontroversial “thingness” of AI, she was the first person I heard speak about Palantir in the context of AI. And she did that at Concordia eight months ago. So, you’ve got these systems that are coming out, you’ve got the “thingness” of the process — and I’ve almost forgotten what I was saying. Where was I going with that?
SR: I don’t know.
JJ: We drifted off, we drifted into war.
SR: Maybe we can leave war?
JJ: I was trying to drift towards a positive thing. Okay, so where I was going is that we’ve got enormous potential with those Alpha models. Like if you look at the chronology: 2015, there’s AlphaGo. Lee Sedol, you know, super mastery in Go. And then you’ve got AlphaFold.
SR: Yeah.
JJ: And before that, you’ve had AlphaZero and AlphaGo Zero, and they learn shogi, chess and Go without ever playing a game against a human, in a sense. They learn against themselves, and they learn within 24 hours to a level that’s superhuman. So, these are proof systems, that humanity has to let go of what Donna Haraway calls the narcissistic conjury of diverse organisms. We really are going to have to face that, because AlphaFold comes along — there’s a field that really is struggling to understand the massive proteomics systems. AlphaFold 1 understands about 100,000 proteins. There’s 20,000 proteins that are produced by the human body, and there’s a bunch of lab organisms, and they understand that. That’s 2019 or ’20. Two years later, they’ve synthesized and understand the folding, the conformational geometry and the topology of 200 million proteins. It’s speculated that there are only 300 million proteins in the known universe.
SR: Right.
JJ: So, this is just like insane in terms of this—
SR: And the applications are starting to get like CRISPR—
JJ: Yeah.
SR: Actually, the first treatment using CRISPR to cure sickle cell anemia is just being implemented now.
JJ: Yeah. It came out a few days ago. And in the background, like a couple months ago, there was a system that discovered 188 new forms of CRISPR — like other kinds of cutting enzymes, like the Cas9. It found 188 other kinds of CRISPR systems that are out there in the bacterial kingdom. Like, CRISPR is just one brand of scissors.
SR: Right
JJ: There’s a lot of scissors out there. And then AlphaFold comes along, and it does that for proteomics. AlphaMissense comes along — you’ve got single nucleotide mutations that occur in our genetic code, every human has them. We might have 6,000 within a typical human body. Out of the known range of potential mutations, which is 71 million, we only knew 0.1%. AlphaMissense comes along and it catalogs 89%. So, we now know that 89% of those mutations are either pathogenic or benign. What can they cause? Cystic fibrosis, sickle cell anemia, exactly as you said. And cancer. So, take all the military budgets of the world, put it into the exploration of proteins and silicon and education. An AI tutor that accompanies every student in the world, inexhaustible, moving at the correct pace—
SR: And then it teaches them to write in a very mediocre, banal way, you know?
JJ: Yeah. Right now, that’s unfortunately what it’s doing. But if we’d said, “oh, we’re not going to develop cars because the Model T is really clunky and the tires fall off and they smell bad and it’s harming the horses.” Well, we wouldn’t have terrible highways—
SR: I guess the horses in this case are people who are losing their jobs.
JJ: There will be shifts. And it’s not a popular opinion. The book I read in the last couple of days was James Lovelock’s Novacene, his last book, written when he was 99 years old. He’s the guy who, you know, whether you agree with the Gaia hypothesis or not, the idea that this Earth is one organism, and I tend to agree with it.
SR: Yeah.
JJ: I think we’re a biofilm on this homeostatically evolving singular system. And we seem to have this notion that we’re all separate and everything, but we’re all breathing. We’re sitting in a little room, you’re breathing out, I’m breathing in molecules. We’re inhaling each other in effect.
SR: Yeah. And I had Covid like last week.
JJ: It’s okay, it doesn’t matter. I’m immune. I’ve inhaled it too many times. So, Lovelock’s got this idea in Novacene, which is about hyperintelligence, and it’s surprising to hear from him because he wrote nothing about it up until that point: that hyperintelligence will actually supersede us, whatever anthropomorphic principle we have around AI.
SR: Yeah.
JJ: And he’s writing this book in 2019, pre-ChatGPT — even transformers probably wouldn’t have been out when he was writing. So, the notion of this anthropomorphic AI, the sort of alien with the big head.
SR: We’ll talk about the singularity a little bit, because that’s something you’re maybe more into than I am.
JJ: He’s definitely saying that it’s inevitable.
SR: And what is this singularity?
JJ: Well, that’s Kurzweil’s idea. And this morning I read what I would classify as an AI doomer versus AI boomer article, but it was like, effective accelerationism, which is this kind of cult of Silicon Valley maniacs, libertarians who were all for, like, “let’s let the market economy rule everything.” And, you know, that’s going to solve everything.
SR: Yeah, I read about this too. It started out as like a joke in response to this sort of super ethical—
JJ: The ideological magnetism of memes.
SR: Their attitude is just like, “Let it do its thing. It’s okay if things go wrong. You know, move fast and break things, right?”
JJ: Yeah, yeah. So they’re going down that wormhole.
SR: But the singularity, just for people who don’t know what that is.
JJ: Well, that’s Ray Kurzweil, and he wrote a series of books that are incredibly intriguing: The Age of Intelligent Machines, The Age of Spiritual Machines. I read those in the 90s, and I became a convert to this idea. Again, he’s not some sort of eccentric lunatic — just as James Lovelock was working for NASA, he invented the electron capture detector, which was capable of identifying the CFCs that were creating the Antarctic ozone hole. And that led to massive legislation, and we didn’t all burn to death. That was wonderful.
SR: Well.
JJ: Maybe we will burn to death. But Kurzweil himself, he basically invented OCR, to some degree, and a bunch of speech synthesizers, and he did early work with poetry generation—
SR: I didn’t know that.
JJ: Yeah, he’s got his own poetry generator, you can find it online. It was using Markov chains, but it was building models of independent poetic voices.
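For readers who haven’t met them: a Markov chain generator picks each next word at random from the words observed to follow the current word in a training text. A minimal word-level sketch, purely illustrative, not Kurzweil’s generator:

```python
# Minimal word-level Markov chain text generator, in the spirit of early
# poetry generators; an illustration, not Kurzweil's actual code.
import random
from collections import defaultdict


def build_chain(text: str) -> dict[str, list[str]]:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain


def generate(chain: dict[str, list[str]], seed: str, length: int = 20) -> str:
    """Walk the chain, sampling a successor at each step."""
    word, out = seed, [seed]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: no observed successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)


corpus = "the rain falls and the rain sings and the night falls softly"
print(generate(build_chain(corpus), seed="the"))
```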
SR: So, the singularity — I’m going to get to it, I’m just going to explain.
JJ: I’m not doing a background check.
SR: The singularity is this point at which machine intelligence eventually supersedes, becomes more intelligent than human intelligence.
JJ: The point at which computation equivalent to the collective human intelligence costs the same as a laptop. 2045 is apparently where that curve lands. And Kurzweil has hired a team of statisticians to track the rate of innovation, and the curve appears to hold — he’s had some right predictions and wrong predictions, but his prediction rate is about 80%. He thinks we’re on that path. He’s taking his 250 pills so he can be alive when the nanobots come around to make us all young again.
SR: So they can farm us in different ways. There’s the dark side of that vision and the bright side.
JJ: Go for the dark side.
SR: Well, the dark side, of course, is the existential risk, the fear of programming ourselves out of necessity. And then the AI says, “well, we don’t really need them anymore, they cause a lot of problems. Let’s help to clean the planet a little bit.” So that’s maybe the dark side, but then you’re on the sunny side, more or less.
JJ: I’m on the whatever side, I think. I’m not on a side anymore. Obviously, we believe that we are humans, right? That’s the narrative. That’s a mythological apparatus that’s been driving us as beings. And I think that’s been driving literature. I’m going to go sideways a little bit because I like conversations.
SR: Let’s come back to literature, I like that.
JJ: So literature has had this mythological obsession with the quest, right? Like, what is the question? The question is how to find contentment. How do you do that? Through establishing territory, through war or power, or through romance, finding a binary dyad coupling where you can breed and enjoy children. Literature has mapped out that terrain. It’s one or the other: it’s either the agonies of war or the joy of war, or the celebration and triumph of just convivial domestic bliss.
SR: Or I’d say, mortality, too, is another big question in literature.
JJ: Yeah, yeah. And ultimately mortality is answered by this idea of like, let’s breed.
SR: Let’s replicate as quickly as we can.
JJ: Those imperatives are pretty deep in us. And just as you said, in that dark vision, which I think, in a sense, is the truthful vision, if there is an intelligence that’s maybe 10,000 times swifter than us — which is a conservative estimate for some people, if we’re 10,000 times more intelligent than plants. I have an amniotic hypothesis. So, the amniotic hypothesis, and you might not like this, is a new kind of mythology. If the human being is 10,000 times less intelligent than an artificial superintelligence, which we are giving birth to, in effect, then it’s ultimately going to, just as you said, kind of ditch the amniotic fluid, because it doesn’t need this much moisture on the planet. It doesn’t need this much oxygen.
SR: Yeah.
JJ: And we’ve had pollution events before, like the rise of the photosynthetic bacteria. That was an explosion of oxygen, which eventually gave rise to the mammals. And here we are, you know, three and a half billion years later, 300,000 years after the birth of humanity, we’re sitting here having a chit-chat through little lovely microphones, and it’s going to be broadcast to as many people who ever want to hear it.
SR: 50.
JJ: 50 people out of the 8 billion. But there are efforts; I came across — I can’t even tell you the name of the researcher, but the slogan that they have is: Make AI for the eight billion people rather than for the eight billionaires.
SR: Yeah.
JJ: And this is a good way to navigate, which is a potentially perilous position, because moving to Mars, that’s just a terrible idea.
SR: I think, and this perspective isn’t one that I hear often, but I think in spite of this sort of AI arms race that we have where people are going to want to accelerate AI, inevitably — at the same time, I sort of have this faith/fear that we will successfully limit AI. I think that’s something I probably hope for, that we get to this kind of happy medium. I think people will say, “yeah, we don’t actually want it to be that much smarter than us, right?” We want it to be more efficient in certain ways, but I think humans, in spite of all the planet wrecking that we’re doing, have so far displayed a great deal of tenacity when it comes to human survival. Perhaps not quite as much as, let’s say, rats or horseshoe crabs, but, you know, in terms of propagating the species, we have done fairly well. So, I think, you know, are we going to let intelligent machines clean the earth of us? Probably not. If we’re going to let anything clean the earth of us, it’ll probably be being angry at each other and using nuclear weapons.
JJ: Yeah. I’m much more negative, let’s say bullish, or whatever the economists would say, on the idea of an all-out nuclear war; that’s just apocalyptic. AI, well, I think we can navigate that. And there’s extreme beauty to the human species. Like, there’s a lot that’s going on here that is just exquisite. Like, look what we’ve done: We’ve managed to cooperate at levels which are unprecedented for any other organism. We’ve got sophisticated modes of communication that have evolved spontaneously in ways that are nourishing and beneficial to so many beings. So, we could recalibrate ourselves. And I think there’s evidence, at an imaginative level, if we’re going to come back to Extending Digital Narrative.
SR: Yeah.
JJ: And the noninvasive brain sensor research that’s come out in the last two to three months is astonishing. So, I’ve got a hypothesis that’s just as likely: Instead of swiftly evolving towards superintelligence, maybe transformers are it, kind of like The Mother of All Demos. There’s Douglas Engelbart back in 1968. He demos, like, email, video chat, collaborative things.
SR: The mouse, I think.
JJ: Yeah, the mouse, the GUI, collaborative document editing — he does it all right there, he’s got it all figured out. This is perfect, and then we just follow that pathway. We’re still using those innovations. But what if transformers are really it, and we don’t find the next bit of secret sauce that will allow a modularized entity to really emerge. Maybe it’s not going to emerge, there’s a high probability that it will not emerge. We tend to be really optimistic about, you know, building. If you look at any construction site, it usually takes five times as long as they predict.
SR: Right.
JJ: And so it could be that we’re just stalled. But even if we’re stalled, there’s going to be an extremely vivid, intense, beautiful time in terms of medicine and imagination.
SR: Yeah, medicine and imagination is what we’re particularly focused on. And I do have this feeling right now, even though we’ve made these sorts of small experiments, nothing earth shattering, about the different parts of our brain that we’re activating and the different types of writing, really, that we’re doing. For me, I’m still getting my head around the fact that I can write images using these things, right? And that, by writing a sentence and knowing a little bit about code, it can spit out code for me. And we can write out, you know, maybe they’re only 15 seconds, but we can write out these songs where there are voices singing. Just the capacities of that, in spite of all the problems, the capacities that it generates for creatives, for artists — it’s mind blowing.
JJ: Yeah, it really is.
SR: Why don’t we stop there?
JJ: Sure.
SR: Well, your last sentence.
JJ: I was still going to go into the noninvasive brain sensors, because they are now beginning to track brain signals: they can decode speech. They can decode what you are going to say. They can decode the music you are listening to. They can decode what you are seeing. They can do these under specific conditions, which are usually fMRI, but also magnetoencephalography. And in the last two weeks there was a merger between Forest Neurotech and Butterfly. Butterfly produces this handheld ultrasound sensor, which, in the medical context, is very, very helpful.
SR: I know a digital artist who’s moved to that company.
JJ: Wow.
SR: A guy whose PhD I examined, and he, you know, made art robots before, and now he works on those devices.
JJ: Because that’s going wild. They’ve discovered that ultrasound can not only read the brain in a noninvasive way — and fast. The trouble with fMRI is it’s huge, and it’s capital intensive.
SR: You can’t carry it around with you.
JJ: You just can’t put it in your pocket. But this thing can read the brain, and it can also write to the brain, not in specific ways, but the research goal, I think, is to do what we all dreamed of as young writers, as young thinkers: When your brain is in that volatile, let’s call it Cambrian, adolescent explosion, when ideas are just flocking, and a pure, radiant creativity is being effervescently born in every instant. That will be readable and writeable across diffused networks that will coalesce and collaborate in this synergetic explosion of culture, which is extremely promising for the empathic reservoir of humanity to evolve creative and nourishing cooperative pathways. The potential for beauty is very, very strong, as is the precarious potential for extinction.
SR: I think our job is to stick with the potential for beauty without being naive, of course.
JJ: Yeah, it’s a precarious path.
SR: And we will be doing that for the next several years, at least here in the Extending Digital Narrative Project, and at the Center for Digital Narrative. And it’s just a joy to have you here, Jhave.
JJ: Well, it’s a joy to be here. Thank you very much.
Listen to the full episode of Off Center.
References
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. New York: ACM. https://doi.org/10.1145/3442188.3445922
Ellmann, Richard. 1959. James Joyce. Oxford University Press.
Johnston, David Jhave. 2016. Aesthetic Animism: Digital Poetry’s Ontological Implications. The MIT Press.
Johnston, David Jhave. 2019. ReRites. Anteism. https://www.anteism.com/shop/rerites-david-jhave-johnston
Johnston, David Jhave. 2024. Identity Upgrade. The digital review. https://thedigitalreview.com/issue04/jhave/index.html
Karp, Alexander C. 2023. “Our Oppenheimer Moment: The Creation of A.I. Weapons.” The New York Times, July 25th, 2023. https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html
Kurzweil, Ray. 1990. The Age of Intelligent Machines. The MIT Press.
Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. Viking.
Lovelock, James. 2019. Novacene: The Coming Age of Hyperintelligence. Penguin Books.
Rettberg, Scott. 2024. Fin du Monde. The digital review. https://thedigitalreview.com/issue04/scottrettberg/index.html
Suchman, Lucy. 2023. “The uncontroversial ‘thingness’ of AI.” Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206794
This research is partially supported by the Research Council of Norway Centers of Excellence program, project number 332643, Center for Digital Narrative and project number 335129, Extending Digital Narrative.