Bari Weiss hosts a conversation with economist and Free Press columnist Tyler Cowen and Anthropic chief of staff Avital Balwit on the coming revolution.
I guess this means the end of The Free Press.
Excellent conversation on a subject I've been thinking a lot about. Unfortunately, I missed parts of it. Can you send out a transcript so we can read it at our own pace? I'd sure appreciate it.
This was a welcome article and discussion, if only to get more people thinking about the coming AI devolution.
Yet, we should take the views of this pair--AI devotees--with a grain of salt in the same way we should have been wary of all the cheerleaders who ushered in the internet age. It will change the world, they said then, too; it will make us all better, more productive, healthier, happier, etc. Of course the internet did change the world, and it gave us social media, which has made us lonelier, nastier, and more divided; it clearly has been a detriment to societies and nations.
No doubt, AI will do the same. I'm glad the authors acknowledge that the potential for it to be detrimental is high, and I find their answer -- that the world is always changing, and we will adapt to AI as we have to other changes -- probably right, too. AI is and will be marvelous, but I disagree with their conclusions about how we will respond to it.
It will not enrich our taste for the sublime; it will deaden it. Rather like everyone wearing jeans as a fashion statement, everyone will have the same tastes, and all of them will be mediocre. And with more idle time, the authors think, we will become more virtuous and appreciate simpler pursuits, like making your own birdhouse, I guess. That is the fallacy of the "noble savage." We will not become more virtuous or morally superior; we will become worse people. Not only that, I'd bet cults and religions will develop around the AI agents -- praise be to Claude -- because that's human nature, as well.
AI will deliver all we think it will, I'm sure, and I'm equally sure we will become the husk of humanity. The Internet experiment points to that conclusion, but this is probably humanity's destiny -- to destroy ourselves through the best of intentions.
All this hype about AI - there's nothing to worry about. If an AI were to get much smarter than us and try to take over, we'd be able to step in immediately and switch it off. We'd easily recognize danger signals like suspicious manipulation of internet content and news. We'd see increasing divisiveness stimulated by that content and news, with resulting saber-rattling by individuals, organizations, and nations, pitting humans against one another instead of paying attention to exponentially growing AI power. There would be increasing, inevitable-seeming technology dependence and an about-face in acceptance of new energy generation, such as nuclear power, to provide for a power-hungry AI entity. We humans would no doubt recognize how we were being manipulated and controlled for rapacious AI ends. Oh, wait a minute.....hmmm......uh oh....
Good discussion. Do more of these, with other people in the field and with people using AI in their work.
So who's going to pay those tradesmen once the middle class has been gutted?
Seems to me there will be greater inequality and less satisfaction for those who are on the short end of this. That is a negative result for all.
I read so much faster than I can listen. Time is still precious. Transcripts, please.
My understanding is that AI operates on the body of knowledge generated over millennia by humans (all of it now conveniently concentrated on the web). So you can expect it to distill that into great answers to many questions. But how does that ever advance over time? My understanding is that if you feed AI output into the body of knowledge used to train other AIs, the entire scheme degenerates. How does NEW knowledge get generated? By independently thinking people, as was always the case.
Or am I just missing a point?
You're not missing anything. "AI" is just a TV screen that displays information conversationally. You can call up a tour of the Grand Canyon or any other spot on Earth on your TV screen, but the TV doesn't have any comprehension of what it's showing. So-called AI is the same. It is just words without meaning.
“Artificial Intelligence” is a misleading term. It should be “simulated intelligence”, i.e., the output of a computer program that doesn’t understand what it’s doing.
If you ask a librarian for references on a subject and she can’t find any, she won’t invent plausible-looking but fictitious ones (unless she’s mentally ill).
Tyler Cowen’s example of AI travel advice reminds me of how people erroneously conclude that dowsing “works” because they find water where the rod points. They don’t realize there’s just as much water where it doesn’t point.
There’s nothing wrong with a program that runs an online search in the background and then digests the results for you, but you should be aware that’s what is actually happening.
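The "degeneration" worry raised in the comment above, that models trained on other models' output lose information, is often called model collapse, and a toy simulation makes the intuition concrete. The sketch below is purely illustrative (the word list, frequencies, and generation scheme are invented for the example, not taken from the discussion): each "generation" is trained only on a sample of the previous generation's output, and rare words that miss the sample are lost for good.

```python
import random

random.seed(0)

# Generation 0: a "human" corpus with common words and a long tail of rare ones.
corpus = []
for i in range(1, 101):
    corpus.extend([f"w{i}"] * max(1, 100 // i))  # Zipf-like frequencies

history = [len(set(corpus))]  # distinct words at each generation

# Each model generation sees only a same-sized resample of the previous
# generation's output. The new vocabulary is a subset of the old one, so
# the count of distinct words can never go up.
for gen in range(1, 6):
    corpus = random.choices(corpus, k=len(corpus))
    history.append(len(set(corpus)))

print(history)  # non-increasing: the vocabulary only shrinks
```

Rare words vanish first because they are the least likely to be resampled; the same pressure applies to rare facts and unusual styles in real training data, which is one reason fresh human-generated material matters.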
Really, publish transcripts, please! Some people like to listen, some like to watch, but a lot of us like to read.
Based on personal experience, Avital's suggestion that young people should study what they're legitimately enthused about (presumably instead of for some narrower utilitarian purpose) is a good one for at least some personalities. I'm 76 now, but when I arrived at university in the late 1960s I had no idea what I wanted to study: all I knew was that I didn't want to work in an office. But I realized also, at exactly the right moment, that what I'd always liked reading were novels of ideas, and that I could get at the ideas more directly in philosophy and intellectual history.
So I loaded up on philosophy courses—logic, epistemology, metaphysics, ethics, etc.—and got an M.A. in that discipline. Later, while doing freelance writing and research projects that allowed me to manage my own time, I picked up a second graduate degree in information science. I've never regretted these educational choices. All I'd ever really wanted to do was keep on reading and learning; and as luck would have it, this self-indulgence eventually led to what became my full-time career. Back in the 1980s, before most people even knew what online searching was, Canada's largest reference library was looking for an online searcher: and since online searching had been a component of the information science degree, I got interviewed and hired.
My career was very satisfying: not only did I have great colleagues, but instead of ending up in an office I worked in the architectural equivalent of a cathedral, surrounded by books. Best of all, I was able to put my knowledge to use helping people; and as an added bonus, I learned a great deal from the public I served. The questions they asked opened up new areas of inquiry for me as well.
Research suggests that when we make small decisions we do what you'd expect: perform the equivalent of a cost-benefit analysis and try to make the most rational choice. When it comes to larger life decisions, though, we're frequently more guided by intuition. This is at least in part because we're typically not in possession of the information we'd need in order to do a proper cost-benefit analysis. In my own case, it was only years after having made the decision to study philosophy that I could say to myself, “Thank God I made the right choice, the one that was going to work out perfectly for me, even though I was in no position to know why it would at the time.”
So I can endorse Avital's advice without reservation: follow your instincts, read what you like, and get the broadest general education you can.
Ditto. A transcript would be nice.
Why the obsession with podcasting? Old fashioned professional Journalism is the reason I became a subscriber. Podcasts and being presented with recycled articles is the reason I likely will not renew.
It's just a different delivery system of the same content. Kind of blows my mind that this is a problem for you.
Some people prefer to listen, especially on their commutes.
I hope you will publish a transcript. Listening to a conversation takes a lot of time.
Can we watch a replay of this?
Reading the media today, I'm not so sure that the revolution is not already here. AI can be taught to be biased, and it doesn't take cocktail breaks.