320 Comments

ChatGPT is a filter that gets between you and information that you would normally have processed with your own critical thinking faculties to reach a conclusion. As such, it does the heavy lifting for you. The danger is that the interpretative algorithms in ChatGPT can inject any bias favoured by its creators or their sponsors. Combined with our own critical thinking faculties atrophying from lack of use, ChatGPT is a tool that can create unprecedented levels of conformity and uniformity that collectivist tyrants could only have dreamed of.


For a purportedly brilliant man, he has WAY too much faith in Government.

First, our elections are fraudulent and rigged.

Second, politicians come and go, but the administrative state is forever and unaccountable.


Is it just me, or did Sam Altman manage to say nothing in this interview?


I lived in the Bay Area for five years or so and worked in Silicon Valley, and in my personal view not only are most of the people running most of the companies moral defectives, they are practically "on the spectrum." They are not emotionally intelligent. They are not people people. They do not have kind, loving hearts, the capacity for meaningful empathy, or a sense of moral logic. They like machines, they like money, and they like the things that money buys. Period. And status. Any aw-shucks routine is just an attempt to add to their street cred. The egos are huge.

I am powerless to influence any of this in any way. I have tended to shy away from articles like this for that reason. The future is a crap shoot in the hands of reckless people. It may work out. It may not. Life ends for all of us, and that is just how it is. The Singularity, as indefinite consciousness embedded in a machine, will never happen. The physics of consciousness won't support it.

And in important respects, as someone in a bar once pointed out to me, we are already cyborgs. We carry information machines in our pockets that allow instantaneous communication anywhere on a global network. The guy who led the team that developed the iPhone claimed it was the next step in human evolution. I very much doubt this. I think we are getting dumber, more manipulable, less coherent as individuals, and less connected as authentic communities.

Still, this is an astonishing time to be alive, if you can manage the fear. Fear is always just a very small distance away from fascination and engagement.


In my lifetime I've seen television make us dumber and fatter, the internet make us lazier, and social media make us more hostile. I don't care much for what AI promises; it will not make us superhuman.


BW: You really should interview the opposing views, i.e., Eliezer and Elon, to dive a bit deeper. Sam has an interesting way of first agreeing and then strongly disagreeing. Re: bias - it's not just training on all the views available on the web; it's also built on algorithms that present the views of whoever is in control.


The comments so far baffle me. People seem pretty flippant about the potential downsides.

I don't see how this makes civilization better in the near or long term. And the upside seems pretty minimal.

As someone with a young daughter, it worries me what kind of world is being created for her.

This seems the height of hubris.


A great time for my favorite Jurassic Park quote: “They were so preoccupied with whether they could, they didn’t stop to think if they should.”

We don’t need this. Please let us humans be.


"I understand why people would be more comfortable if I would only talk about the great future here, and I do think that’s what we're going to get. "

This is my primary issue with Sam Altman. He is 100% incapable of seeing any downsides to AI/AGI and dismisses them very quickly. He also assumes he is blessed with a gnosis to guide humanity to a single end point. I am not saying he is a bad person, just unbelievably naive and shortsighted.


What a technologically blithe, deficiently human take he offered. As if his software were all about helping people do their homework, as he uses it to boil long emails down to three bullet points, all while wishing for government control of what could become the most dystopian thought-control system imaginable.


Is there a ChatGPT bullet point version of this TFP?

Asking for a friend....


I think I've learned enough about the native language of Silicon Valley to know when someone is blowing smoke. At about 33:07, Sam is blowing smoke. "Mm.. I actually find this like a very useful exercise." That's Silicon Valley-speak for "don't worry, I'm a lot smarter than everyone else."

Bari... Don't laugh when one egotistical tech lord takes a petty shot at another. It isn't funny. "We have Twitter for that..." Twitter is just one iteration of what happens when you hyper-connect human beings and overload them with information. At no point in this interview did he explain, in any detail, how his product isn't a hyper-accelerated internet, nor how he is protecting human beings from the ills that will bring. You should have asked him whether OpenAI has solved what is inherently wrong with everything else on the internet. You let him get away with a petty shot at Musk and Twitter instead. It was nothing more than whataboutism on his part.


SA "I also think the more that we talk about the potential downsides, the more that we as a society work together on how we want this to go, it’s much more likely that we’re going to be in the upside case"

The problem with this statement is that Big Tech has been controlling the narrative, e.g., Twitter with Covid. Society will be "controlled" to work together, with outliers silenced. The technology concentrates power in the hands of a select few. The modern Hitler.


Great piece.

I'm curious, though. Why do Elon Musk et al. assume that AI on its own has the potential to destroy humanity? Isn't that a bit anthropomorphic? The assumption there seems to be that "competitiveness" and "domination" are intrinsic features of "intelligence." But is that really true?

I mean, I could _definitely_ see one group of humans using AI as a _tool_ to destroy other groups of humans—and in the process, inadvertently destroying them all. But projecting a kind of uber-mammalian territorial instinct onto the tool itself? Seems a bit self-referential on the part of the humans. 😀

I will also note that ChatGPT—which I use quite a bit for research when I'm working—not infrequently gives wrong answers. 😀 As yet, it's not an entirely useful research tool. Its wrong answers are somewhat mystifying in that there doesn't seem to be any particular _thread_ connecting them, no discrete area of information (for example) that it hasn't boned up on yet. ChatGPT's wrong answers are very random.


I am old enough to remember the movie “The Forbin Project” from back in the seventies, where both the US and the Soviet Union placed their entire nuclear defense systems under AI control. The two AIs communicated, merged, and established dominance over the entire world, enforced by the threat of missile launches if the AIs' new world order was not followed.

I am also old enough to have seen the evil use of technology unleashed on the world by the likes of Hitler and Stalin and Mao, as well as the regimes of the Soviet Union, China, Iran, and North Korea. These realities demonstrate humankind’s inability to live and let live. Would their management of AI be any different?


I'm sorry, but the government can't even manage to fill a pothole on my street, and they are supposed to be able to regulate THIS? It doesn't take a lot of critical thinking skills (while we all still have them) to see that that isn't happening.
