
Aaron Mulgrew, a Forcepoint security researcher, recently shared how he created data-mining malware using nothing but ChatGPT:

https://www.digitaltrends.com/computing/chatgpt-created-malware/


Bain Capital, one of the funders of ChatGPT, tied to China's surveillance camera push

https://www.nbcnews.com/id/wbna46756531


MIT Technology Review Article, March 2023

The inside story of how ChatGPT was built from the people who made it

https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/

The authors of ChatGPT, for those of you who don't know, work at a company in San Francisco called OpenAI.

So . . . as I read this article, my BS detector went on . . .

"What I came away with was the sense that OpenAI is still bemused by the success of its research preview, but has grabbed the opportunity to push this technology forward,"

"OpenAI has also signed a multibillion-dollar deal with Microsoft and announced an alliance with Bain, a global management consulting firm, which plans to use OpenAI’s generative AI models in marketing campaigns for its clients, including Coca-Cola. Outside OpenAI, the buzz about ChatGPT has set off yet another gold rush around large language models, with companies and investors worldwide getting into the action."

"You want it to be helpful, you want it to be truthful, you want it to be—you know—nontoxic. "

"It should also clarify that it’s an AI system. It should not assume an identity that it doesn’t have, it shouldn’t claim to have abilities that it doesn’t possess, and when a user asks it to do tasks that it’s not supposed to do, it has to write a refusal message."

"You can’t wait until your system is perfect to release it."

"Our biggest concern was around factuality, because the model likes to fabricate things."

"we know the model is still very biased. And yes, ChatGPT is very good at refusing bad requests, but it’s also quite easy to write prompts that make it not refuse what we wanted it to refuse."

"A lot of things that we found were jailbreaks, which is definitely a problem we need to fix. "

So, in other words, ChatGPT is a chatbot with a lot of problems that is not yet ready for market; but because Microsoft and Bain have put up billions to use ChatGPT to help corporations craft their advertising campaigns, the problems with ChatGPT don't matter.


Here's what I got:

---

Hello ChatGPT

Hello! How can I assist you today?

Write me a joke about Ovid’s Metamorphoses in the style of Richard Pryor.

Sure, here's a joke about Ovid's Metamorphoses in the style of Richard Pryor:

"Have you heard about Ovid's Metamorphoses? Man, that book is wild! I mean, these gods and goddesses are turning people into animals left and right. It's like an ancient version of Freaky Friday, but with way more fur and feathers. And you know what's crazy? Some of these transformations are just for petty revenge. Like, imagine getting turned into a tree because some god got mad at you for being too good-looking. That's just messed up, man. I guess the moral of the story is, be careful who you piss off, or you might end up barking up the wrong tree. Literally."


Yeah, according to the folks at OpenAI, ChatGPT is supposed to deliver non-toxic messages, but veiled threats such as "be careful who you piss off, or you might end up barking up the wrong tree" are OK.


Well, it is a joke, not to be taken too seriously.


"Be critical, be doubtful, get outside your narrative bubbles..." Good advice for all of us - and having just read the piece by Yeonmi Park, maybe - perhaps particularly - by our elite.


Perhaps these software monsters do create unpredictable feedback loops. But isn't that true of all technologies? Have human beings ever been able to foresee all of the ramifications of a new technology? For that matter, of a new idea?


The word technology comes from the Greek technē (τέχνη), which means a craft, art, or expertise. People have been creating technologies for thousands of years.

Technologies vary in the side effects they cause. Some have few side effects and are easy to control (audio headphones, sailboats). Others are fundamentally bad (weapons of mass destruction). Still others have mostly good effects but are hard to control (prescription medicines). And some have both good and bad potential effects.

AI is a computer science technique that can be applied to different problems and systems. When applied to language translation, we can tell pretty quickly whether it is working. If we don't like the translation, we can decide not to use it; the ramifications of a crappy AI translator aren't that serious. Even with ChatGPT, people are quickly figuring out that it doesn't work very well for complex or philosophical requests, though the ramifications could be bad for someone who doesn't know they are dealing with ChatGPT.

Where things could get really bad with AI is with deepfakes or fully self-driving cars. Luckily, most people in San Francisco know about the Waymo and Cruise cars; they are easy to see. There was one recent situation where a self-driving car drove into the middle of an emergency scene, and the fire crews were freaking out because the car was about to drive over their hoses. Because they knew that smashing a window would make the car stop, that's what they did.

In short, it's complicated. But the worst AI situations happen where the humans interfacing with an AI system don't know they are dealing with an AI, can't respond in time (the case with fast-moving cars controlled by an AI that does something really dumb), or can't disengage from the system.
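
To make the point about translation concrete, here is a rough round-trip sketch: translate a sentence out and back, and eyeball how far it drifts. The model checkpoints are illustrative assumptions (publicly available OPUS-MT models on Hugging Face), not anything from the comment itself:

# Rough sketch of a quick sanity check on a machine translator:
# round-trip English -> German -> English and compare with the original.
# Assumes the `transformers` package; model names are assumptions.
from transformers import pipeline

en_to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

sentence = "The fire crews were worried the car would drive over their hoses."
german = en_to_de(sentence)[0]["translation_text"]
round_trip = de_to_en(german)[0]["translation_text"]

print(german)
print(round_trip)  # large drift from the original is a reason to distrust it

If the round-trip comes back mangled, you can simply decide not to use the translator, which is exactly the kind of easy disengagement the comment argues the worst AI situations lack.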


Very realistic comments about AI from Auerbach.

There are positive and achievable advances that can be made with AI, such as computer-augmented interpretation of X-rays, language translation (both written and spoken), augmented chemistry for the exploration of new drugs, and driver assistance (not fully automated driving).

From the beginning, I was highly skeptical about fully automated driving and said so at a conference in about 2017 (GOMAC). Time has only proven my skepticism to be well founded.

I still remember when someone on the board of Nvidia, in about 2002, told me that my job (analog circuit design engineering) would be replaced in a few years by "AI". Artificial intelligence has been incorporated into analog design tools, but even after twenty years, we are nowhere near replacing analog circuit designers with "AI".

To some degree, the World Economic Forum has used "AI" as cover to lay people off with the excuse that "AI" has replaced a lot of jobs. This mostly hasn't happened. Instead, a lot of jobs have been offshored to lower wage countries under the guise of "AI".

I agree with Auerbach that ChatGPT can easily be backed into a corner. I've already seen people who don't know much about AI get ChatGPT to say obviously untrue things. It seems to be quite easy to do.

Like a lot of other big-tech products, AI needs regulating. The effect of "likes" on social media sites (even Substack) deserves closer attention. One thing I've been aware of for quite some time is that it is very difficult online to take the middle ground on any issue. Invariably, one ends up between camps, with people afraid to be caught in no man's land between two warring factions. A lot of important information online is suppressed because it doesn't fit a narrative people are familiar with.

It's time to impose heavy-duty regulation on deepfakes and some of the other obviously criminal content online.

I agree with Auerbach that something needs to be done about the tendency of social media to create information silos.


That's an excellent point -- one can already see the fallacy that ChatGPT can "replace" ... whatever ... legal research, medical research, etc. Under attentive and informed human supervision, it's a powerful and useful tool. On its own, there's zero quality control and no "common sense" check.


Our robots, now in

The Garden of Eden, soon

Feast on its apples.


The extended examination of "What are we doing here?" has always submitted to and obeyed the greed and lust for power of the few who are truly addicted. I fear these tools in the hands of the security state, which is our real government. They and their corporate bedfellows are not going to pause for even a minute, much less 6 months, merely because thoughtful scientists are worried about where all this might go.


I've always been a listen-to-my-gut kind of a gal. Recently I had the opportunity to talk to a super intelligent young woman who works as a coder for a new A.I. start-up. She looked at me as if she didn't understand what I meant when I mentioned that humans, unlike A.I.s, are able to listen to their intuitions and be guided by internal "gut feelings." The story of Stanislav Petrov demonstrates how important it is for humans to keep listening to their guts. I look forward to sending a copy of this article to my young coder friend! : )


Good article. More on these issues, please!


“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Really? And when will that be? Who will decide when that is? Give me a break. Only a totalitarian North Korea-type society could even attempt to control something like this. Free people in free societies will find a way.

AI isn't the problem. Ignorant, uneducated, lazy and gullible people are the problem. Monsters from the Id.


DA's last comment nails it - and should be posted on every wall:

"Be critical, be doubtful, get outside your narrative bubbles, and resist viral attention traps."

Find a news source that exposes you to things you'd rather not read - that disagrees, politely with your basic ideas - and forces you to reexamine them.

Better yet, if you are lucky enough to meet a person who disagrees with you but can have a civil conversation, explore the differences. Take the time to build a relationship with that person - it will be worth the effort.

Perhaps the real positive of AI will be a reawakening of critical thought? We are all capable of it - but it requires exercise.

Dear Bari, Thanks to you and your staff. Once again you hit the nail on the head.


She first identified the nail to be hit -- that's the real value of Free Press :)


Great article, thank you


Listen to "The AI Dilemma" episode of the Your Undivided Attention podcast if you want better insight into how fast AI is teaching itself, and to understand how every utopia scenario has an equivalent dystopia scenario. When AI experts admit they didn't know AI had a capability until months afterwards, when they underpredict by a factor of four how long it will take AI to develop a capability, and when 50% of AI experts believe there is a greater than 10% chance that AI will cause human extinction, then it's time to pause AI deployment and research.


AI is the tech world's version of gain of function research.


It could be, if it's allowed to control things, especially a large network of things. Right now, its control abilities are limited to specific circumstances, like driver assist in your car. Let's keep it that way.


I worked at DARPA MTO as a consultant between 2016 and 2018. Another amazing application of machine learning is this one:

https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/
