Comments (16)

Spiritual street urchins! Great line, and true for many, but there are others holding their own against it.

In addition to Mr. Kingsnorth’s exploration of the forces driving AI advancement, I would like to read something from the “Baptists and Bootleggers” angle (borrowing from Mr. Andreessen’s essay) on the existential dangers of AI development. That said, these are great reasons to subscribe to The Free Press. Thank you.

While the author goes off into the ether, he's not wrong on his core points.

Namely, that this AI revolution is inevitable, that the people working on it aren't particularly bright in terms of understanding its consequences, and that the cost of trying to avoid it will be enormous. No travel, no concerts, no health care, etc. China is close to this reality already, even without AI to accelerate the process.

While I don't buy into the 'worship' dramatization, AI will be able to do great and terrible things. And since it is easier to destroy than it is to build, it won't take much for AI to destroy a lot. Change how we treat cancer, autoimmune diseases and memory loss? Sure. Convince us that war is our only option and we need to let the nukes fly now? Or create a bioweapon capable of doing enormous damage to entire societies? Also sure.

Humanity evolves slowly; that is the nature of biology. The author is correct on this essential point: the ability of computers to create and evolve exponentially is the real issue here. As societies, humanity isn't capable of managing the power that is going to be created. We're still petty, motivated by negatives, and often unable to fix basic societal issues on a collective basis. Having better tools isn't the answer. Humanity's problems will get magnified by AI, even as it helps you find the best airfare.

I have to say that while I enjoyed reading this, it is also quite frustrating to read. The writer assumes that the Singularity is real and achievable; I don't agree that is a correct assumption. There is no discussion of what consciousness is and whether "machines" can become conscious. That doesn't mean they can't "mimic" consciousness, but that is not the same thing.

I am in no way saying that there aren't downsides to AI and reasons to be concerned; there are. Though I think that Marc A. minimizes them, especially the short-term dangers. For example, I operate in the emergency management "space." Imagine someone uses AI to hack into a mission-critical system in the course of a disaster response and inserts fake but believable information that results in redirecting critical, potentially life-saving resources. There were already several attempts at doing exactly that using social media during wildfires some years ago. Given the ubiquity of digital systems and the increasingly widespread use of AI even today, that is frightening. It is not enough to say we can use AI to develop safeguards and defenses. I think there needs to be some organized and conscious effort to explore and counteract such dangers now, not wait until it happens.

I would not characterize Kingsnorth as a rigorous thinker, but I'm not sure he would disagree with that assessment himself. His essays are great for their mystic elements and as a way to think along pre-modern lines.

The best argument I have heard yet for pursuing AI at speed is "If you don't, then the bad guys will." Unfortunately, I can no longer discern if we are the good guys or the bad guys.

I occasionally rebel by tossing my smartphone in a Faraday bag, particularly when traveling. But yes, I have considered a technology “purge” in retirement. I’m surprised “Uncle Ted” didn’t get a passing mention here. He may have been batshit crazy and homicidal, but he nailed several issues with technology way out front.

I have to say, while it was a powerful read, I find it hard to believe that this is the “strongest argument” representing an opposing view to Marc’s. Marc at least provided scientific reasoning for his approach. It would have been nice to hear an opposing view counter in a similar fashion, especially given Marc’s central claim that dissenters are cultists. Kingsnorth’s article felt like it validated Marc’s claim too easily in that sense.

For comparison, Marc was recently on Sam Harris’s podcast, and I thought Sam did a better job of pushing back from a scientific and philosophical perspective than this write-up does.

Still, an interesting piece, if not the “strongest argument.”

An AI need not have "theory of mind" or "self-awareness" in order to create the very clever illusion that it has those things. Ultimately, the problem with AI comes down to what the programmer wrote into the program AND what the programmer did NOT write into the program--limitations on what the AI can "learn" and how it can apply that knowledge.

I've noticed that not one person has mentioned Isaac Asimov's three laws of robotics. I can therefore only assume that the writers of AI programs have failed to place such limitations on their programs. And it's worth noting that Asimov explored, in his stories, the many ways that things could go wrong, even with those limitations in place.

These programmers are like children playing with nuclear warheads.

[Comment deleted, Jul 11, 2023]

Asimov himself proved that his Three Laws of Robotics, practicable or not, would be unable to prevent harm to humans in the long run.

My point was that there does not appear to have been ANY attempt to place proactive limitations on AI. We can make laws against certain unacceptable human behaviors, and implement punishments for breaking those laws. But there seems to be no law restraining AI behavior, and no means of punishment even if there were. If a program is resident in multiple places across the internet, simply pulling the plug will not be enough.

[Comment deleted, Jul 12, 2023]

The problem is that there CAN'T be any "content moderation" after the initial rounds of input. You can give the AI instructions, but there are no limitations on how it carries out those instructions. And it builds upon the foundation of its initial programming and input, so whatever the programmer built into it and exposed it to "train" it is going to color everything that it does from that point forward.

No one told the AI to learn Persian, and it did so anyway, which exposes a massive flaw in the system. It can already go beyond its programming in ways the programmers did not foresee.

Building a machine that has the capacity to take actions you did not instruct it to take is very literally building your own potential destruction.

I don't agree with Kingsnorth on everything. But I'm delighted to see the FP giving him a voice here.

People like Marc Andreessen are incapable of thinking like this, of thinking INTENTIONALLY about meaning and spirituality and narrative. Since the scientific revolution, we have systematically avoided thinking in these terms, set them aside. In Aristotelian language, we have no concern for "final" or "formal" causes; only "material" and "efficient" causes interest us.

The material results have been astounding. Extreme poverty has been radically reduced. That's wonderful.

But spiritually, we are poor, sickly, stunted, filthy street urchins.

And yet, we cannot do without meaning. Andreessen et al. just fall into their semblances of it, thoughtlessly. They have no way even to address such questions.

Foreboding but accurate. With the accelerating advancements in AI, I am reminded of a saying in aviation that was known well by the more experienced flyers: "Just because you CAN, does not mean you SHOULD." Those who are codifying our replacements in the technium would do well to heed this wisdom.

I remember a certain spy-plane pilot (a U-2, IIRC) who turned in film that was finally identified as the underside of a bridge. He flew under it upside down.

Two things, obviously more, can be true at the same time. The answer to Kingsnorth's question seems to be that digital technologies are serving each, humanity and the Machine, by serving the other. I share his sense that a new life is being birthed. This theme is littered throughout science fiction and literature more broadly--the emergence of a new entity which draws upon elements of preexisting familiar material and/or experiential domains but exceeds everything that came before.

But, to better understand this process, let's dumb things down to an analogous process far more concrete--the ontogeny of a new organism. We identify the cell as the most fundamental living thing. After coming into being, it metabolizes, grows, and then divides into new cells, almost automatically, driven by some unseen force. This process continues countless times. And, while each cell is doing its thing, whether that involves dividing into new cells or just metabolizing and performing whatever its function is, would there be any way to identify, at the stage of a 16-, or 32-, or 64-, ... or 4096-cell cluster of an embryo, that the ball is becoming something entirely new, self-sustaining and compelled by higher priorities than merely the survival of any one cell? Do you consider the welfare of any one cell of your body, and to the extent that you do, does the welfare of any one cell really compare to your welfare as a cluster of highly differentiated cells acting in, even bound by, the welfare of you as a whole?

If, when you were a multi-celled embryo, some cellular version of Kingsnorth had said, "Hey, wait a minute, these biochemical rules by which we're living, growing, and dividing seem to be leading to a situation that will deprive any one cell or group of cells of their primacy, in favor of something whose primacy is going to supersede us individual cells," would the correct response be to terminate the process of the cell cluster becoming you? Of course not. It's unstoppable.

I strongly share the ascetic impulse, and associated with that, a libertarian impulse. But I am now resigned to the view that, as Kingsnorth posits, there is no way to draw such lines. The human species is the boot-loading program of what comes next. Individual humans are as cells to the evolving embryo. Our digital and other technologies are the equivalent of the thermodynamically driven biochemical machinery which is driving the cell cluster--individual humans--to become something entirely new and different. When complete, that new thing will be an entity in and of itself. As individual cells, our responses will range from full embrace of the process to outright hostility toward it. But the median trajectory will be inexorably forward. That seems to be where evolution and thermodynamics insist we go. The only way to stop it might then be to terminate the embryo, whether intentionally or as one possible consequence of the broader ontogenetic process itself (see: the Great Filter). Is that really what we want?

Paul, love your take on this, and our relationship to technology generally. For anyone unfamiliar with his writing, go check out his Substack; really thoughtful stuff:

https://open.substack.com/pub/paulkingsnorth

I know you’re stepping away from the public sphere, but I’d love to interview you before you do. How can I get in touch? Email Bari’s team?
