This is the most important essay we have run so far on the artificial intelligence revolution. I’m excited for you to read it.
Given its importance, tomorrow I’ll be hosting a conversation about it with its authors, Tyler Cowen and Avital Balwit, at 4 p.m. ET. This conversation, like all our livestreams, is for paying subscribers only. (Sign up here to join our community.)
And click here to mark Tuesday’s conversation in your calendar.
— Bari
Are we helping create the tools of our own obsolescence?
If that sounds like a question only a depressive or a stoner would ask, let us assure you: We are neither. We are early AI adopters.
We stand at the threshold of perhaps the most profound identity crisis humanity has ever faced. As AI systems increasingly match or exceed our cognitive abilities, we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence. This transformation won’t arrive in some distant future; it’s unfolding now, reshaping not just our economy but our very understanding of what it means to be human.
We are not doomers; quite the opposite. One of us, Tyler, is a heavy user of this technology, and the other, Avital, is working at Anthropic (the company that makes Claude) to usher it into the world.
Both of us have an intense conviction that this technology can usher in an age of human flourishing the likes of which we have never seen before. But we are equally convinced that this progress will bring a crisis about what it is to be human at all.
Our children and grandchildren will face a profound challenge: how to live meaningful lives in a world where they are no longer the smartest and most capable entities. To put it another way, they will have to figure out how to prevent AI from demoralizing them. But it is not just our descendants who will face this issue; it is increasingly obvious that we do, too.
OpenAI CEO Sam Altman asserted in a recent talk that GPT-5 will be smarter than all of us. Anthropic CEO Dario Amodei described the powerful AI systems to come as “a country of geniuses in a data center.” These are not radical predictions; the systems they describe are nearly here.
In 2019, GPT-2 could barely count to five or string together coherent sentences. By 2023, GPT-4 was outperforming nearly 90 percent of human test-takers on medical licensing exams and the bar exam. In 2024, Claude 3.5 Sonnet was answering questions about complex scientific diagrams and charts with over 94 percent accuracy. Leading researchers at frontier AI companies increasingly believe we’ll achieve AI systems that can match or exceed human cognitive capabilities across virtually all domains before 2030.
Much of the commentary and punditry about AI has focused on things like the future of paralegals or consultants. But these are minor topics that overlook the most fundamental, existential question we will soon face: What does it mean to be human in an age of superintelligent machines? What exactly is special about our minds? What is there left for our species to do?
The purpose of this essay is to talk about what is coming and how to protect ourselves from the demoralization that such marvels can bring.