
Why No One Can Control AI

Former Microsoft and Google engineer David Auerbach says Big Tech gurus are right to be frightened of their own creation.

You can blame David Auerbach for the smiley face emoticon. He invented it when he worked for Microsoft on its messenger service in the late 1990s, fresh out of Yale.

Auerbach had always been precocious. He was the first kid on his block in 1980s suburban Los Angeles to have his own computer, an Apple IIe, which he got when he was six. His parents, both psychiatrists, pressed science fiction paperbacks into his hands; their house was strewn with Prozac- and Zoloft-branded pens and mousepads. The first program he wrote was a single line of code: “Draw a square.” By age eight, he was writing code to create long animated movies in Logo, and at 13 he read all of Vonnegut in two weeks.

“Computers and coding made sense to me,” Auerbach tells me over the phone from his home in Manhattan. “There were these elegant systems that weren’t burdened by the messiness and complexities of human life.” 

After Yale and the smiley face—which he never saw a penny from, since there were no royalties for emoticons—he went to work for Google in 2004, at a time when it was hiring busloads of Microsoft’s best engineers. “At Google, I got a close-up look at AI and how unpredictable these things could be, and how hard to control,” Auerbach says. “If something went wrong, AI tended to make it harder to figure out why it had gone wrong, and harder to fix without messing something else up.” ChatGPT was still nearly two decades away; the AI he worked on at Google was used for things like its search engine.

Most of us think of AI as something new, but it has been with us for decades, in everything from the Roomba vacuum to Clippy, Microsoft’s much-maligned office assistant. Auerbach writes about AI and the implications of Big Tech in his new book, Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities. He coined the term “meganet” to describe networks that have grown increasingly beyond the control of their government or corporate administrators: Facebook, Twitter, Google, cryptocurrency networks, even online games. “The problem isn’t AI per se,” he says.

“AI works wonderfully in contexts like voice recognition or playing chess. The problem is when AIs are hooked to these meganets. That’s when the interaction of hundreds of millions of people and extraordinary processing power yields feedback loops that send these systems out of control. For example, Microsoft’s Bing chatbot, Sydney, could not have spun out fantasies of releasing nuclear codes and gaining power if it hadn’t been seeded with our very own nightmares of AI taking over.”

His book couldn’t have come at a better time. Last month an open letter arrived like a warning shot, signed by several of tech’s best-known names, including Elon Musk, Stuart Russell, Max Tegmark, Yoshua Bengio, Grady Booch, and Steve Wozniak, calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter also warns, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

What does Auerbach think of their concerns? “One thing the letter tells us,” he says, “is that even the people who work on this technology are starting to get pretty uncomfortable with what it’s capable of doing. And what they’re proposing is basically, ‘Let’s just kick the can and buy ourselves some time to figure it out.’

“The nature of their discomfort varies from person to person,” he adds. “Some people are concerned about actual existential risk and releasing the nuclear codes. There are other people, probably closer to me, who are more concerned about the technology being used to manipulate discourse and make reality even more confusing than it already is.”

Whatever the problems with AI, Auerbach doesn’t think we can fully stop it. “The difficulty is, what are you actually going to do in the next six months that is going to make things better?” he says, referring to the letter. “I don’t see a lot of their concrete suggestions as being ones that are hugely feasible in that time frame. And they talk about ‘confidence.’ Who the hell knows what confidence is? And how do we know when we’ve gotten there? Here’s the answer: We aren’t going to get there. And we’re going to lie to ourselves that we have.”

I spoke to Auerbach about the real risks of AI and why chatbots seem to like talking about nuclear war.

PS: Tech futurists are split between those who predict AI will usher in a positive era of abundance, such as Google engineering director Ray Kurzweil, and those who think we’ll inevitably be replaced, such as Sun Microsystems co-founder Bill Joy, who believes we’re heading for a robot rebellion. Which side do you fall on?

DA: Both Kurzweil and Joy reflect a tendency to overestimate the technological component and downplay the human one. A lot of what you get out of these machines is what you put in, and these machines are nowhere close to conscious or sentient. The overly optimistic predictions about “strong AI”—truly sentient, autonomous machines—were being made in the 1950s too, and looking back, they seem preposterous. The current technology’s real impact is on how we perceive reality. Deepfakes indistinguishable from the real thing, such as new Jimi Hendrix albums and new Humphrey Bogart movies, are all coming.

PS: In your book you write that meganets like Facebook are as immune to prediction as tectonic plates and the weather. It’s unnerving to think Facebook has become as unpredictable as a monsoon.

DA: Zuckerberg may dislike the criticism he gets over Facebook’s privacy violations, disinformation, and hate speech, but the truly unsettling thing is that he lacks the power to fix many of those problems. That sheer loss of control is much scarier than anything critics might say to him. Even China can’t manage its own systems well enough to suppress dissent to the extent it would like, and it has an army of censors and moderators observing the entire populace.

PS: One way you recommend fighting harmful discourse on meganets is to intentionally “pollute” large data stores and AIs “with random garbage data.”

DA: If you have mechanisms that arrest and slow down these meganets—mechanisms that sow doubt and break up the narrative bunkers and microcultures meganets tend to produce—that could be effective. Right now the algorithms are driven by engagement and “likes,” so you get more of what you already like. Yes, the meganets are technically giving people what they want, or what they think they want, but we’ve always known that giving people exactly what they want isn’t necessarily good for them.
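That feedback loop is simple enough to sketch in a few lines of Python. What follows is a toy illustration of the dynamic Auerbach describes, not code from Meganets or from any real platform; the topics, scores, and parameters are all invented:

```python
import random

# A hypothetical feed of 100 items, each tagged with one of two topics.
ITEMS = [("outrage", i) for i in range(50)] + [("nuance", i) for i in range(50)]

def run_feed(rounds: int = 200) -> dict:
    # The ranking signal: the user starts with no preference between topics.
    affinity = {"outrage": 1.0, "nuance": 1.0}
    for _ in range(rounds):
        # The feed surfaces whichever candidate the user already engages with most.
        topic, _ = max(random.sample(ITEMS, 10), key=lambda item: affinity[item[0]])
        # Engagement with the shown item feeds straight back into the ranking signal.
        affinity[topic] += 0.1
    return affinity

print(run_feed())  # whichever topic gets an early edge snowballs; the other starves
```

Run it a few times and the affinities always come out lopsided: a narrative bunker in miniature, built from nothing more than “show people more of what they liked.”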

PS: What’s been your experience with chatbots?

DA: I’ve been doing this long enough that I know where to poke to get it to throw up. I got ChatGPT to claim that five was not a prime number! For ChatGPT to be able to do what it does, and yet claim that five is not a prime number, that’s something very different than anything we would think of as human intelligence. It was babbling. Remember, we’re cueing AI. If we say, “What are your darkest desires?,” it looks up “AI’s darkest desires” and gives us back what we want. And it’s easy to attribute emotions, intentions, and mental states to such things, because nothing else has ever used language at this capacity that hasn’t had emotions, intentions, and mental states. 

PS: Is a chatbot as clever as it seems?

DA: It is incredibly convincing and yet incredibly sclerotic. It doesn’t understand what it’s saying. It’s just sort of manufacturing stuff out of its vast, vast, vast resources. If you ask it to explain itself, that’s where it tends to fall apart—that sort of second-order thinking, like, “Why is this funny?”

PS: Thank God we still have Mel Brooks.

DA: A friend of mine told ChatGPT, “Write me a joke about Ovid’s Metamorphoses in the style of Richard Pryor.” I think it’s pretty good! It was, “Have you ever read Ovid’s Metamorphoses? That’s some wild stuff, man. It’s like a Greek version of Dr. Dolittle, except instead of talking animals, you’ve got gods turning people into trees and rivers and stuff. And then there’s that story about Narcissus falling in love with his own reflection. I mean, we’ve all been there, am I right? But Ovid takes it to a whole new level. And don’t even get me started on that whole Pygmalion statue thing. I mean, talk about a guy with unrealistic expectations. But you know what they say, love is blind. Or in Pygmalion’s case, love is stone deaf and dumb.”

PS: Richard Pryor would have used at least a dozen F-words.

DA: It filters for that. That’s one of the things they try to blanket-ban.

PS: Conservatives are programming “right-wing chatbots.” Given AI’s unpredictability, are politicized chatbots realistic?

DA: You can’t hope to police the data going into an AI on a case-by-case basis, but if the data is already partitioned, you can be somewhat selective. If you were to dump in only the contents of Fox News, you would definitely shunt the AI in certain directions, though you couldn’t control whether it sounds more like Tucker Carlson or Sean Hannity. This is what OpenAI has been doing with ChatGPT, shunting it toward being as politically anodyne as possible. It defaults toward tolerance of almost anything, and you can still defeat it if you’re clever: I was able to get ChatGPT to say that various extremist or racist groups should be tolerated.
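You can see that dependence on the training diet with even the crudest statistical model. The sketch below is a toy illustration, not how ChatGPT or any production chatbot is actually built: a bigram Markov generator trained on a single invented “outlet” can only recombine that outlet’s words, so everything it emits carries the slant of what it was fed:

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict, start: str, length: int = 12) -> str:
    """Walk the bigram table from a start word, echoing the corpus's phrasing."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# An invented single-outlet corpus: the model cannot say anything it never saw.
partisan_corpus = (
    "the border crisis is the real story and the elites ignore the border crisis"
)
print(generate(train_bigrams(partisan_corpus), "the"))
```

A large language model is incomparably more sophisticated, but the same principle holds: partition the data going in and you partition the voice coming out.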

PS: The 2013 movie Her predicted that people would fall in love with AIs. Can you see that happening?

DA: In some cases, I think AIs will substitute for human emotional interaction. Emotional satisfaction is hard enough to get from a person, so if you can get it somewhat more reliably from an AI, and all you have to do is convince yourself it’s real, that’s going to be pretty appealing.

PS: What would you say to someone who asks, “Why shouldn’t I worry about AIs blowing up the earth?” I’m thinking of Stanislav Petrov, the Soviet officer who, in 1983, likely averted nuclear armageddon single-handedly when he decided not to act on a computer warning that the United States had just launched five intercontinental ballistic missiles. Asked later how he knew the data was flawed, Petrov said, “I had a funny feeling in my gut.”

DA: I don’t think anyone’s going to put an AI in unmoderated control of such a thing anytime soon. If there’s something to worry about, it’s not the AI acting autonomously; it’s the AI misrepresenting reality in a way that convinces a human to do it. As AI and meganets increasingly filter and present reality to us, the chances of that filtered reality pushing someone in a very harmful, unintended direction go up.

PS: How would you advise someone to adapt themselves to an AI world?

DA: Be critical, be doubtful, get outside your narrative bubbles, and resist viral attention traps. The more some bit of content demands you engage with it, the better off you are ignoring it and putting forth something more constructive.

Follow David Auerbach on Twitter at @AuerbachKeller. For more science coverage, read David Zweig on the dangers of gain-of-function research.
