Sam Altman. (Drew Angerer via Getty Images)

Is AI the End of the World? Or the Dawn of a New One?

A conversation with Sam Altman, the man behind ChatGPT, about the risks and responsibilities of the artificial intelligence revolution.

If you had asked most Americans a year ago whether they’d heard of OpenAI, few outside of Silicon Valley would have recognized the name. Now OpenAI’s artificial intelligence chatbot, ChatGPT, is used daily by more than 100 million people. Some of them—including the economist Tyler Cowen—report using it more often than Google. ChatGPT has become the fastest-growing app in history.

The app can write essays and code. It can ace the bar exam, write poems and song lyrics, and summarize emails. It can give advice and information, and it can diagnose an illness from a set of blood results, all in a matter of seconds. And all of the responses it generates are eerily similar to those of an actual human being.

For many people who have spent time with this technology, it feels like we’re on the brink of something world-changing. They say that ChatGPT—and the emerging AI revolution more broadly—will drive the most critical and rapid societal transformation in human history.

If that sounds like hyperbole, don’t take it from me.

Google’s CEO Sundar Pichai said the impact of AI will be more profound than the discovery of fire. Computer scientist and Coursera co-founder Andrew Ng said AI is the new electricity. Recently, The Atlantic ran a story comparing AI to nuclear weapons.

The smartest technologists in the world are insisting that this technology is going to be a world-changer. The question is: for good or ill?

One of the pioneers of AI-alignment research, Eliezer Yudkowsky, claims that if AI continues on its current trajectory, it will destroy life on Earth as we know it. “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” he recently wrote.

You can’t come up with more of a doomsday scenario than that. But Yudkowsky’s not the only one with serious concerns. Thousands of experts and ethicists—people like Elon Musk and Steve Wozniak—say they are so concerned about this technology that in March, in an open letter, they called for an immediate pause on training any AI systems more powerful than the current version of ChatGPT. 

So: which is it? Is AI the end of the world? Or is it the dawn of a new one?

To answer that question, I reached out to Sam Altman.

Altman is the co-founder and CEO of OpenAI, which makes him one of the most powerful people in Silicon Valley—and if you believe the hype, perhaps the world.

In the most recent episode of Honestly I ask him: Is the technology that powers ChatGPT going to fundamentally transform life on Earth as we know it? If so, how? How will AI affect our jobs, our understanding of intelligence, our relationships, and our basic humanity? And are the people in charge of this powerful technology, people like himself, ready for the responsibility?

You can listen to the whole conversation here. Below, you’ll find a slightly edited version.

—BW

BW: In just a few years, your company, OpenAI, has gone from being a small nonprofit that few outside of Silicon Valley paid much attention to, to having a multibillion-dollar arm and a product so powerful that some people spend more time on it than they do on Google. Other people are writing op-eds warning that the company and technology you’re overseeing have the potential to destroy humanity as we know it. For those who are new to this conversation, what happened at OpenAI that led to this massive explosion in just a few short months?

SA: First of all, we are still a nonprofit; we have a capped-profit subsidiary. We realized that we just needed way more capital than we could have raised as a nonprofit, given the compute power that these models needed to be trained. But the reason we have that unique structure—around safety and the sharing of benefits—is only more important now than it used to be. The last seven years of research have really paid off. It took a long time and a lot of work to figure out how we were going to develop artificial intelligence, AI, and we tried a lot of things. Many of them came together, some of them turned out to be dead ends, and finally we got to a system that was over a bar of utility. Some may argue whether the product is or isn’t intelligent, but most people would agree that it has utility. After we developed that technology, we still had to develop a new user interface. Another thing that I have learned is that making a simple user interface that fits the shape of the new technology is important, and usually neglected. We had the technology for some time, but it took us a little while to figure out how to make it really easy to chat with. We were very focused on this idea of a language interface, so we wanted to get there. We then released that to the public, and it’s been very gratifying to see that people have found a great deal of value in using it to learn things, to do their jobs better, and to be more creative.

BW: ChatGPT is the fastest-growing app in the history of the internet. In its first five days, it got a million users. Then, within two months of launching, it had amassed a hundred million users. Right from the beginning, it was doing amazing things. It was all anyone could talk about. It could take an AP test, it could draft emails, it could write essays. . . . Most recently, before I went on Bill Maher, I knew we were going to talk about this subject, so I asked ChatGPT for a Bill Maher monologue, and it churned it out in seconds. And it sounded a whole lot like Bill Maher! He was not thrilled to hear that. Yet you have said that you were embarrassed when GPT-3 and GPT-3.5, the first iterations of the product, were released. Why is that?

SA: Well, Paul Graham, who ran Y Combinator before me and is a legend in Silicon Valley, once said to me, “If you don’t launch a Version One that you’re a little embarrassed about, then you waited too long to launch.” There are all of these things in ChatGPT that still don’t work that well, and we make it better and better every week.

BW: What are you using ChatGPT for right now? 

SA: Well, this is the busiest I’ve ever been in my life, so at the moment, I am mostly using it to help process inbound information. Summarizing emails, summarizing Slack threads. I take a very long email that someone writes and it gives me a three-bullet-point summary. That may not be its coolest use case, but that’s how I’m personally using it right now to help my day-to-day.

BW: What is its coolest use case? 

SA: Well, I get these heartwarming emails from people every day telling me about how they use it to learn new things and how much it has changed their lives. I hear from people in all different areas of the world. It takes very little effort to learn how to use it and it can become someone’s personal tutor for any topic they wish. A lot of programmers rely on it for different parts of their workflow. That’s kind of my world, so we hear about that a lot. There was a Twitter thread recently about someone who says they saved their dog’s life because they input a blood test and symptoms into GPT-4. 

BW: I’m curious where you see ChatGPT going. You use the example of summarizing long-winded emails or summarizing Slack. These are menial tasks, like ordering your groceries, sending emails, making payments. But then there are different tasks—tasks that are more foundational to what it is to be a human being. For example, things that emulate human thinking. Someone recently released an hour-long episode of The Joe Rogan Experience with you as the guest. Yet it wasn’t actually Joe Rogan. And it wasn’t actually you. It was entirely generated using AI language models. So, is the purpose of AI to do chores and mindless emails, or is it for the creation of new conversations, new art, new information? Because those seem like very different goals with very different human and moral repercussions.

SA: I think it’ll be up to individuals and society as a whole to see how they want to use this technology. The technology is clearly capable of all of those things, and it’s clearly providing value to people in very different ways. We also don’t yet know exactly how it’s going to evolve, where we’ll hit roadblocks, what things will be easier than we think, what things will be much, much harder. What I hope is that this becomes an integral part of our workflow in many different tasks. It will help us create. It will help us do science. It will help us run companies. It will help us learn more in school and later on in life. I always like to swap out the word AI for the word software. So instead ask, “Is software going to help us create better?” or “Is software going to help us do menial tasks better, or do science better?” And the answer, of course, is all of those things. If we understand AI as just really advanced software, which I think is the right way to do it, then the answers may be a little less mysterious.

BW: Sam, in a recent interview, when you were asked about the best- and worst-case scenarios for AI, you said this of the best-case: “I think the best is so unbelievably good that it’s hard for me to imagine.” I’d love for you to imagine, what is the unbelievable good that you believe this technology has the potential to do?

SA: I mean, we can take any sort of trope that we want here. What if we’re able to cure every disease? That would be a huge victory on its own. What if every person on Earth can have a better education than any person on Earth gets today? That would be pretty good. What if every person a hundred years from now is a hundred times richer in the subjective sense? Maybe they’re happier, healthier, have more material possessions, more ability to live the good life as they define it than people have today. I think all of these things are realistically possible.

BW: So, what’s the other side of it? You said the worst-case scenario is “lights out for all of us.” I’m sure a lot of people have quoted that line back to you. What did you mean by it? 

SA: I understand why people would be more comfortable if I would only talk about the great future here, and I do think that’s what we’re going to get. I think this can be managed. I also think the more that we talk about the potential downsides, the more that we as a society work together on how we want this to go, the more likely it is that we’re going to be in the upside case. But if we pretend like there is not a pretty serious misuse case here and just say, “Full steam ahead! It’s all great! Don’t worry about anything!”—I just don’t think that’s the right way to get to the good outcome. When we were developing nuclear technology, we didn’t just say, “Hey, this is so great, we can power the world! Oh yeah, don’t worry about that bomb thing. It’s never going to happen.” Instead, the world really grappled with that, and I think we’ve gotten to a surprisingly good place.

BW: There are a lot of people sounding the alarm bells on what’s happening in the world of AI. Recently, several thousand leading tech figures and AI experts, including Elon Musk, who co-founded OpenAI but left in 2018; Apple co-founder Steve Wozniak; and Andrew Yang, whom you backed in the last election, signed this open letter that called for a minimum six-month pause on the training of AI systems more powerful than GPT-4. They wrote, “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves, should we let machines flood our information channels with propaganda and untruth?”

SA: We already have Twitter for that. 

BW: [laughs] “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” 

That’s what they wrote. And I think there are several ways to interpret this letter. One is that this is a cynical move by people who want to get in on the competition, and so the smart thing to do is to tell the guy at the head of the pack to pause. The other cynical way to read it is that stoking fear around this technology only makes investment flood further into the market. I also see a pure version, which is that they really think this technology is dangerous and needs to be slowed down. How did you understand the motivations behind that letter? Cynical or pure of heart?

SA: You know, I’m not in those people’s heads, but I always give the benefit of the doubt. Particularly in this case, I think it is easy to understand where the anxiety is coming from. I disagree with almost all of the mechanics of the letter, including the whole idea of trying to govern by open letter, but I agree with the spirit. Some of the stories I hear about new companies trying to catch up with OpenAI and their discussions around cutting corners on safety I find quite concerning. I think we need an evolving set of safety standards for these models where, before a company starts a training run, before a company releases a new model, there are evaluations for the safety issues we’re concerned about. There should be an external auditing process that happens. Whatever we agree on, as a society, as a set of rules to ensure safe development of this new technology, let’s get those in place. For example, airplanes have a robust system for this. But what’s important is that airplanes are safe, not that Boeing doesn’t develop their next airplane for six months or six years or whatever. 

BW: There were some people who felt the letter didn’t go far enough. Eliezer Yudkowsky, one of the founders of the field, or at least he identifies himself that way, refused to sign the letter because he said that it actually understated the case. Here are a few words from an essay he wrote in the wake of the letter: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ . . . If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.” How do you understand that essay? Why are some of the smartest minds in tech this hyperbolic about this technology?

SA: Look, I like Eliezer. I’m grateful he exists. He’s like a little bit of a prophet of doom. Before this, it was that the nanobots were going to kill us all and the only way to stop it was to invent AI. And that’s fine. People are allowed to update their thinking, and I think that actually should be rewarded. But if you’re convinced that the world is always about to end and, in my opinion, you’re not close enough to the details of what’s happening with the technology, I think it’s hard to know what to do. So, I think Eliezer is super smart. But the field of AI in general has been one with a lot of surprises. I think this is the case for almost any major scientific or technological program in history. Things don’t work out as cleanly and obviously as the theory would suggest. You have to confront reality, you have to work with the systems, you have to work with the shape of the technology or the science, which may not be what you think it should be theoretically. You deal with reality as it comes, and then you figure out what to do about that. Many people never thought we would be able to coexist with a system as intelligent as GPT-4, and yet here we are. So I think the answer is we do need to move with great caution and continue to emphasize figuring out how to build safer and safer systems and have an increasing threshold for safety guarantees as these systems become more powerful. But sitting in a vacuum and talking about the problem in theory has not worked. 

BW: You’ve compared the ambitions of OpenAI to the ambitions of the Manhattan Project. And I wonder how you grapple with the kind of ethical dilemmas that the people that invented the bomb grappled with. One of the things that comes to mind is the pause letter. Many people are asking you to pause research. Meanwhile China, which is already using AI to surveil its citizens, has said that they want to become the world leader in AI by 2030. They’re not pausing. So, let’s discuss your comparison to the Manhattan Project. What were the ethical guardrails and dilemmas that they grappled with that you feel are relevant to the advent of AI? 

SA: I think the development of artificial general intelligence, or AGI, should be a government project, not a private company project, in the spirit of something like the Manhattan Project. I really do believe that. But given that I don’t think our government is going to do a competent job of that anytime soon, it is far better for us to go do it than to just wait for the Chinese government to go do it. So, I think that’s what I mean by the comparison. I also agree with the point you were making, which is that we face a lot of very complex issues at the intersection of new scientific discovery and geopolitical or deep societal implications, which I imagine the team working on the Manhattan Project felt as well. Sometimes it feels like we spend as much time debating the issues as we do actually working on the technology, and that’s a good thing. It’s a great thing. And I bet it was similar with the people working on the Manhattan Project.

BW: As you well know, there has been a lot of discussion over the past few months about biases in tech broadly. But the difference between something like Twitter and something like AI, some would argue, is that we can at least understand the biases on Twitter’s platform. But when it comes to a technology like AI, whose own creators don’t even fully understand how it works, the bias is not as easy to uncover. It’s not as transparent. How do we know how to find the biases if we don’t even know how to look for them?

SA: I mentioned earlier that the first version of GPT embarrassed me. One of the things I was embarrassed about was that I do think the first version did not do an adequate job of representing, say, the median person on Earth. But the new versions are much better. In fact, one thing that I appreciate is that most of the loudest critics of the initial version have gone out of their way to say openly that we at OpenAI listened to the critiques and that the newer version is much, much better. Biases are unavoidable, but we really looked at our whole training stack to see the different places bias can show up, how to measure it, how to design evals for it, and where we need to give different instructions to human labelers. We’ve made a lot of progress there. I think that has been noticed, and people have appreciated it. That said, I really believe that no two people on Earth will ever agree that one AI system is fully unbiased. The path here is to a) set very broad limits on what the behavior of one of these systems should ever be, and agree on some things that need to be fully avoided—and that needs to come from society, ideally globally, and then b) within that, give each individual user a lot of ability to say, “Here is the way I want this software to behave for me. Here are the things I believe.”

BW: When Elon Musk is saying that OpenAI is training the software to lie, is there any truth to that? 

SA: I don’t even know what he means by that. You have to ask him. 

BW: I recently read that you have no stake in OpenAI. Tell me about your decision to not have any stake in a company that maybe stands to be the most profitable company of all time. 

SA: I mean, I’ve been super fortunate and done super well. I have plenty of money. This is the most exciting thing I can imagine working on. I think it’s really important to the world, and this is how I want to spend my time. I found that I personally like having very clear motivations and incentives. I do think we’re going to have to make some very nontraditional decisions as a company. 

BW: You’re super rich and so you can make the decision not to do that, but do you think this technology is so powerful—and that the possibility of making so much money is so strong—that it’s sort of an ethical imperative for anyone helming any of these companies to be financially monastic about it? You’ve alluded to other companies that are already cutting corners in order to “beat you” in the race. Short of having democratically elected heads of AI companies, how do you regulate this? What are the guardrails put in place to prevent people from being corrupted or incentivized in ways that are dangerous? 

SA: Actually, I do think democratically elected heads of AI companies or major AGI efforts is probably a good idea. I think that’s pretty reasonable. 

BW: Well, that’s probably not going to happen, given that the people that are in charge of this country don’t even seem to know what Substack is. How would that actually work?

SA: I don’t know. This is all still speculative. I have been thinking about things in this direction much more recently, but what if all the users of OpenAI got to elect the CEO? It’s not perfect, you know, because it impacts people who don’t use it, and we’re still way too small to have anything near a representative sample. But it’s better than other things I could imagine.

BW: Let’s talk a little bit about regulation. You’ve said that you can imagine a global governance structure, kind of like a Galactic Federation, that would oversee decisions about the future of AI.

SA: What I would like, more than a global galactic whatever, is something similar to what we’ve talked about. Certainly something like the IAEA [International Atomic Energy Agency]. You know, something that has real international power by treaty and that gets to inspect the labs, set regulations, and make sure we have a cohesive global strategy. That’d be a great start.

BW: What about the American government? What do you think our government should be doing right now to regulate this technology? 

SA: The one thing that I would like to see happen today, because I think it’s impossible to screw up and they should just do it, is government insight. I’d like to see the government have the ability to audit training runs and models produced above a certain threshold of compute. Above a certain capability level would be even better. If we could just start there, then I think the government would begin to learn more about what to do, and it would be a great first step.

BW: My pushback to that would be: do you trust the people currently in government even to understand the nature of this technology, let alone regulate it? 

SA: I mean, I think you should trust the government more than me. At least you get to vote them out. 

BW: Given that you are the person running OpenAI, what are the things that you do to prevent corruption of power?

SA: Like me personally being corrupted by power in the company? 

BW: Yeah, I mean, listen. . . you’ve been a very powerful person in your industry for many years. It seems to me that over the past six months or so, you’ve become arguably one of the most powerful people overseeing a technology that a lot of really smart people are warning, at best, will completely revolutionize the world and, at worst, will completely swallow it or, as you said, “lights out for everybody.” I guess I’m asking a spiritual or psychological question. How do you deal with the burdens of that? How do you know that you’re making the right choices and decisions? 

SA: I don’t have shares at all. I serve at the pleasure of the board. I do this the old-fashioned way where the board can just decide to replace the CEO whenever they want. I do think whoever is in charge of leading efforts should be democratically elected. Somehow, to me, that seems super reasonable and difficult to argue, but it’s not like I have dictatorial power over OpenAI, nor would I want it. I think that’s really important.

BW: That’s not what I’m suggesting. I’m suggesting that the power over the future is emanating out of this particular group of people. And you are one of the stars in that group. And you’ve become a brighter and brighter star. How has that changed you and how you look at it? 

SA: It definitely feels surreal. I heard a story—and it has always stuck with me—about how this former astronaut, decades after going to the moon, would stand in his backyard and look up at the moon and think it was so beautiful. He would then randomly remember, “Oh, fuck, decades ago I went up there and walked around on that thing. That’s so crazy.” I sort of hope that’s how I feel about OpenAI. Decades from now, when it’s on its fourteenth democratically elected president or whatever and I’m living this wonderful life in this fantastic future and marveling at how great it is, I want to remember that I used to run that thing. But to add to that, I think you are probably overstating the degree of power I have in the world as an individual, and I probably under-perceive it myself. I still just kind of go about my normal life with all of the normal human drama and wonderful experiences. And I’m aware of the elevated stakes. I take it super seriously. It is somehow very strange and then subjectively not that different. But I feel the weight of it. 

Click below to hear the whole episode, including Sam’s views on God and UFOs:

CORRECTION: A previous version of this story referred to ChatGPT as an app that “scours the internet for information.” That language has been removed. In fact, one of ChatGPT’s distinguishing features is that, unlike some other AI apps, it doesn’t have real-time access to the internet. Its knowledge is based on a large text corpus it’s been trained on. —TFP

