Can we again devise bots, in the tradition of Nigel Leck's AI_AGW, capable of staving off environmental disaster and saving humanity from its own stupidity? Twitter censored this particular bot, which Leck created "with the singular mission of hunting down false claims made by climate change deniers, calling them out, and correcting them with information linked from peer-reviewed essays in scientific research journals." With this interview, ebr and our Natural Media co-editors celebrate and memorialize this noble, though brief, realization of the critical and creative potential of community-built digital media.
In 2011 Lord Christopher Walter Monckton, Viscount Monckton of Brenchley, got into an unlikely online brawl about climate change. Although his title makes him sound like a cartoonishly sanctimonious villain in a Thackeray novel—one imagines him seated next to the Lady Jane Sheepshanks—he is sadly all too real. Monckton was and is an inveterate climate change denier, a propagator of spurious science who slings his absurd prestige about like a shillelagh. But this is also what made his Lordship's cyber-confrontation in 2011 so comical, for his adversary was equally absurd. Composed of neither flesh nor blood nor subject to the whimsy of the peerage, his foe was, instead, a piece of computer code. More precisely, Monckton's rival was a Twitter-bot named AI_AGW. The bot, whose name stands for "Artificial Intelligence: Anthropogenic Global Warming," was designed for the express purpose of combating false assertions made by climate change skeptics, prominent among them his Eminence, the (third) Viscount Monckton of Brenchley.
AI_AGW debuted in 2008. As its name suggested, it was—at least technically speaking—an "artificial intelligence." But it was no Singularity, no Skynet. It did not even approach GLaDOS's wickedness of depriving one of cake. It was, instead, something far less grand and far more obnoxious: a Twitterbot. Its avatar was a spinning globe, decked out with a colorful array of atmospheric cummerbunds. At its center swirled a glowing red eye, watchful, serious, and full of admonishments (Figure 1).
We love to hate them now—and, indeed, Twitterbots, especially in light of the 2016 election cycle, have earned our loathing—but this bot, in retrospect, made healthy contributions to the toxic micro-blogging ecosystem of 2008. Indeed, to use the contemporary parlance of the environmentally-minded, AI_AGW was a "benevolent pest" that helped de-pollute the Twittersphere of "fake news" about climate science.
Would that we had it back.
2008 was well before the Russian election bots' onslaught against American democracy, well before marketing "agent" bots became conversationally fluent, and well before Microsoft's AI Tay learned to be racially biased. In retrospect, one remembers AI_AGW wistfully. Ten years ago, when bots were still novel and regarded with some measure of awe, AI_AGW pursued a virtuous quest. Far from working to weaken our democracy, sell us pornography, or hold our businesses hostage with ransoms to be paid in bitcoin, AI_AGW attempted to stave off the environmental disaster of anthropogenic global warming by saving humanity from its own stupidity.
Coded by Nigel Leck, a software developer in Australia, the bot debuted with the singular mission of hunting down false claims made by climate change deniers, calling them out, and correcting them with information linked from peer-reviewed essays in scientific research journals. Sadly, Twitter shut down this noble creature, on account of complaints made by people like Monckton, who claimed that the bot violated Twitter's automation policy. Although AI_AGW is no more—long live AI_AGW!—I had the opportunity to chat with Nigel Leck about his creation over a Skype session and email interview, shortly after the bot got nixed. The following is the edited and compiled transcription of our conversations.
LS: AI_AGW has been suspended. What happened?
NL: It got a bit out of control.
LS: Well, before we talk about its demise, tell us how the bot came to be in the first place. What's its origin story?
NL: I got a bee in my bonnet about going over the same thing over and over. When you go and start arguing about climate change, you say to yourself, "I'm just repeating the same answer."
LS: So what do you do?
NL: You get some peer reviewed documents. Skeptical Science is a good site for that.
LS: Skeptical Science: Getting Skeptical about Global Warming Skepticism. https://skepticalscience.com/about.shtml
NL: It has prepared answers for these same arguments. You go to Skeptical Science and you say, "Well, here are the top 100 or 200."
LS: Arguments and rebuttals?
NL: Yeah. Like, a skeptic will say, "The heat follows the sun." He'll give you a chart that seems to prove it and you'll think, "Oh, that's really quite convincing." But someone on Skeptical Science will say, "That chart ends at 1997. Why? There's plenty of data after '97, and here it is." And then the bot got smart enough to recognize these false claims and respond to them.
LS: Smart enough?
NL: There are a lot of voices out there. You have to put each one of them on one side of the spectrum. There was a logic to working out where they were. The bot was learning from the good people and learning from the bad people too. It had to try to work out which camp they were in. Then it would say, "Okay, well, I think I've got an answer for that." But it was just doing a pattern match search. If someone from Professor Brian Cox's camp triggered a rebuttal, you would instantly say, no, that's obviously not a match and we'd reverse it, put it in a negative match pattern. And the bot just automated the process.
LS: So the process was pretty straightforward?
NL: All it was, was basically this: here are the arguments and these are the answers that I've worked out— the answers that I started to give. The bot does a keyword search or the climate denier triggers it by directly challenging science. When they're responding to the bot directly, the system just reads through and finds the appropriate match.
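[The pipeline Leck describes—a keyword-triggered lookup of prepared rebuttals, with a "negative match" list of trusted accounts whose tweets never trigger a response—can be sketched roughly as follows. This is an illustrative reconstruction, not Leck's actual code, which was written in Java but never released; all names and rebuttal text here are invented. —Eds.]

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of the matching loop Leck describes: a table of
// denier talking points keyed by trigger phrases, plus a "negative match"
// list of accounts the bot has learned are "good guys," whose tweets are
// never rebutted even if they quote a denier argument.
public class RebuttalMatcher {
    // Hypothetical trigger -> rebuttal table (not Leck's actual data).
    private final Map<String, String> rebuttals;
    // Accounts classified into the "good" camp; matches from them are suppressed.
    private final List<String> negativeMatchUsers;

    public RebuttalMatcher(Map<String, String> rebuttals, List<String> negativeMatchUsers) {
        this.rebuttals = rebuttals;
        this.negativeMatchUsers = negativeMatchUsers;
    }

    public Optional<String> respond(String user, String tweet) {
        if (negativeMatchUsers.contains(user)) {
            return Optional.empty(); // known good account: never rebut
        }
        String text = tweet.toLowerCase();
        for (Map.Entry<String, String> e : rebuttals.entrySet()) {
            if (text.contains(e.getKey())) {
                return Optional.of(e.getValue()); // first keyword hit wins
            }
        }
        return Optional.empty(); // no pattern match, stay silent
    }
}
```

In Leck's telling, the negative-match list was not hand-curated so much as learned: when a trusted figure's tweet triggered a false positive, that pattern was pushed onto the exclusion side, and the bot automated the rest.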
LS: What did you use to author the bot?
NL: I just coded in Java.
LS: Are you willing to share your code?
NL: No, sorry. A number of people asked, but they wanted to use it for the opposite intent. Happy to send bits of the bot that would be useful in themselves but not really the bot itself. There were a number of people asking to buy it, and I think that was really why it was shut down in the end. Some people thought I had stepped over a line somewhere.
LS: Any backlash from Twitter or other organizations? Any personal vendettas from angry AGW deniers?
NL: Yep, a few death threats from people that wanted to invade Austria for some reason 😉 and some lovely post cards from people that were sick of arguing with the deniers themselves.
LS: So, let's go back to the bot's sad demise. Twitter says the user has been suspended. What happened?
NL: It started to take on a little bit of a life of its own.
LS: I followed an argument that occurred between the bot and some guy talking about Al Gore and cow farts, and it was just ludicrous.
NL: Or Alex Jones would say something. But the funniest was when Monckton—
LS: "Lord Muck-ton."
NL: He came onto twitter and the bot gave him both barrels. AI_AGW had all of Monckton's well-known arguments (and answers) pre-loaded with high confidence matches. Monckton would say something and then the bot would respond and he would respond to that and it would respond to him and so on.
LS: Did he ever realize that he had been punked by a bot?
NL: I don't think so. He was on for half the night and I think he left and—no. It had a lot more stored up to talk to him about. Let's say that much.
LS: It's brilliant.
NL: Yeah, you're walking a rather tight line, but if it pissed off Monckton—well, I slept quite happily that night. That was okay.
LS: Could AI_AGW detect sarcasm? Has that been something you've worked on/out?
NL: The easiest method was just to approve the engagement by working it out at the start. It was smart enough to learn from the "good guys" what a false positive was and exclude them.
LS: how did you decide upon the "face" of AI_AGW?
NL: The red eye was HAL from 2001, but I did want to make it very hard to work out that it was a computer.
LS: @AI_AGW was originally called "Turing Test"? Why the name change?
NL: "Turing Test" was a little too easy to guess.
LS: I have found information about AI_AGW on the Huffington Post, RawStory, Gizmodo, and MIT's technology review (links are below), but I haven't been able to learn how the story broke. Who first picked up the story?
NL: It was Christopher Mims (https://en.wikipedia.org/wiki/Christopher_Mims) who originally contacted me.
LS: Are you aware of any other bots doing similar things? I am a fan of the RiceMaker bot, myself.
NL: I'm sure there are; I just started it as an aid for myself and it sort of got out of control.
LS: Has the bot evolved over time?
NL: I did want to sit down and write a new improved one that used a Darwinian algorithm (improved arguments based on success rates) to argue with creationists, for no real purpose, but it would amuse me no end.
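[The "Darwinian algorithm" Leck muses about—keep the arguments that win exchanges, retire the ones that don't—amounts to selection by success rate over a pool of candidate responses. A minimal sketch, entirely hypothetical since Leck never built this version, might look like the following. —Eds.]

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the "Darwinian" bot Leck imagines: each argument
// tracks how often it was deployed and how often it "won" (say, the other
// party conceded or stopped replying), and the pool is periodically culled
// to its best performers so successful arguments proliferate.
public class ArgumentPool {
    static class Argument {
        final String text;
        int uses = 0;
        int wins = 0;
        Argument(String text) { this.text = text; }
        double fitness() { return uses == 0 ? 0.0 : (double) wins / uses; }
    }

    private final List<Argument> pool = new ArrayList<>();

    public void add(String text) { pool.add(new Argument(text)); }

    public void recordOutcome(String text, boolean won) {
        for (Argument a : pool) {
            if (a.text.equals(text)) {
                a.uses++;
                if (won) a.wins++;
            }
        }
    }

    // "Survival of the fittest": keep only the top-k arguments by win rate.
    public List<String> cull(int k) {
        pool.sort(Comparator.comparingDouble(Argument::fitness).reversed());
        while (pool.size() > k) pool.remove(pool.size() - 1);
        List<String> survivors = new ArrayList<>();
        for (Argument a : pool) survivors.add(a.text);
        return survivors;
    }
}
```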
LS: Did you keep a record of how many times AI_AGW engages?
NL: I did have this, but it's long gone now.
LS: You're a full-time programmer. Was this just a side project for you?
NL: I don't know. Maybe it was more. You don't like it when you follow a false claim down. You go, "Hey, you're lying to me." You don't like it when you've gone down a path and find that out. You think, at first, "Okay, that makes sense. The solar flares do seem to correlate with that increasing temperature." But then you follow that path all the way down, and you say, "Oh, you buggers. You lied, because you knew that you selected that chart so that it would end at that time and you knew that the data that went past that time was contradictory." And when you work that out and realize you've been lied to, you go, "Oh, okay, no, I don't like that." So that's when you push back a little bit. That's when you suddenly say, "I'll spend a bit of time to push back against that."