ChatGPT has been around for about two seconds and it’s already changing how people work, how they write, how they research, how they cheat on tests, how they profess their love, and what they make for dinner.
We didn’t have to talk about theoretical “use cases,” as people still do with cryptocurrency. ChatGPT, which reached more than 100 million users within months of launch, showed us its uses right away.
That is really exciting. And also really unnerving.
The economist Tyler Cowen, who recently told me he uses the app more than he does Google, wrote the best essay I’ve read so far on the subject. His argument is that—as with fire, and the printing press, and the car, and the internet—this technology cannot and will not be stopped.
The question, in other words, isn’t one of yes or no. Instead it’s: How will we learn to harness it? What will we do with it? And what will it do to us?
As with those previous technologies, there are many people warning loudly about AI’s power and potential misuse. One of them is Geoffrey Hinton, a computer science pioneer whose work on the neural networks that underpin modern AI, and who quit his job at Google this week. He told The New York Times: “I don’t think they should scale this up more until they have understood whether they can control it,” citing fears that the average internet user might soon “not be able to know what is true anymore.”
Is Hinton right? Or will he be remembered alongside the people who told us that the invention of the typewriter would ruin romance?
I don’t know. None of us do. Which is what makes this such a rich and complicated subject.
Curious what you all think after you read Tyler’s essay, which was originally published on his must-read blog, Marginal Revolution. See you in the comments.
In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least. For my entire life, and a bit more, there have been two essential features of the basic landscape:
1. American hegemony over much of the world, and relative physical safety for Americans.
2. An absence of truly radical technological change. Unless you are very old, old enough to have taken in some of WWII, or were drafted into Korea or Vietnam, probably those features describe your entire life as well.
In other words, virtually all of us have been living in a bubble “outside of history.”
Now, circa 2023, at least one of those assumptions is going to unravel, namely #2. Artificial intelligence represents a truly major, transformational technological advance. Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.
Number one might unravel soon as well, depending on how Ukraine and Taiwan fare. It is fair to say we don’t know; nonetheless, #1 also is under increasing strain.
Hardly anyone you know, including yourself, is prepared to live in actual “moving” history. It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view, the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.
I am reminded of the advent of the printing press, after Gutenberg. Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits. But it also created writings by Lenin and Hitler, and Mao’s Red Book. It is a moot point whether you can “blame” those on the printing press; nonetheless, the press brought (in combination with some other innovations) a remarkable amount of true, moving history. How about the Wars of Religion and the bloody seventeenth century to boot? Still, if you were redoing world history, you would take the printing press in a heartbeat. Who needs poverty, squalor, and recurrences of Genghis Khan–like figures?
But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum. We don’t know how to respond psychologically, or for that matter, substantively. And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer). No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our applecarts, most of all for the so-called chattering classes.
The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short-term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next-door neighbor.
How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression: “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?”).
So when people predict a high degree of existential risk from AGI (artificial general intelligence, meaning AI whose capabilities match or exceed human intelligence across most domains), I don’t actually think “arguing back” on their chosen terms is the correct response. Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely. Nonetheless, I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them. I have even funded some of this work through Emergent Ventures.
I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable—“boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant; I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other, also distant possibilities, simply should not impress you very much.
Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances. “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.
I would put it this way. Our previous stasis, as represented by my #1 and #2, is going to end anyway. We are going to face that radical uncertainty anyway. And probably pretty soon. So there is no “ongoing stasis” option on the table.
I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?” And believe me, if we do nothing, yes, we will reenter living history and quite possibly get nothing in return for our trouble.
With AI, do we get positives? Absolutely; there can be immense benefits from making intelligence more freely available. It also can help us deal with other existential risks. Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders. And should we wait, and get a “more Chinese” version of the alignment problem? I just don’t see the case for that, and no, I really don’t think any international cooperation options are on the table. We can’t even resurrect the WTO or make the UN work or stop the Ukraine war.
Besides, what kind of civilization is it that turns away from the challenge of dealing with more. . . intelligence? That lacks the self-confidence to confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI? Do you really want to press the button, giving us that kind of American civilization?
So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, or arguments they read on a blog like LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait. The longer a historical perspective you take, the more obvious this point will be. We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge.
See you all on the other side.
Tyler Cowen is an economist and professor at George Mason University.
For another view on this subject, read Freddie deBoer’s post on why the AI hype is overblown.