The simmering anger toward autonomous vehicles, and toward tech writ large, literally ignited in San Francisco this month. A mob attacked one of Waymo’s self-driving cars in the city’s Chinatown neighborhood, vandalizing the Jaguar robotaxi. Then someone threw a firework into the passenger seat. The innocent self-driving car was burnt to a crisp.
For some, this was an act of righteous anger, rage against the machine, a cry for help from people fed up with our slide into automation. “These are people who did not ask for self-driving cars, did not vote to let them onto their streets and shared spaces, doing what they can to push back,” wrote Luddite sympathizer Brian Merchant of the Chinatown arsonists.
In its report on the incident, the tech publication The Verge noted, “Vandalism and defacement are time-honored parts of the human experience,” adding that “tech companies have been forced to reckon with this inevitability as they deploy their equipment in public with impunity.”
In fact, nothing could be more antihuman than rooting against self-driving cars, which, by the way, are just getting good.
I first got in a self-driving car around 2016, back when Uber launched its initial pilot program. During my test ride, a human backup driver took control of the wheel every few moments. I concluded that self-driving cars weren’t going to change the world anytime soon. It seemed obvious to me that Uber’s pilot was less about testing a technology that was almost ready for the road and more about persuading prospective investors that self-driving cars might one day eliminate one of the company’s biggest costs: dangerous human drivers.
But times change. That initial hype has died down, and in the past few months I’ve ridden solo in a number of Alphabet-owned Waymos in San Francisco.
They’re amazing.
The first time I took a Waymo, I’ll admit, it was a little terrifying. You get in and you’re the only person in the car. And you’re not sitting behind the wheel. Then the car starts driving on its own.
Now that I have several Waymo trips under my belt, I’ve come to trust and enjoy them, even more than human-driven rides. It’s a smooth ride. There’s no human driver present to interrupt your phone calls. You can pick the music yourself, and, of course, the science-fiction feeling of being driven by a robot is exhilarating. I tend to sit in the front passenger seat just to get a better view.
I’ve had one hiccup, when a Waymo inexplicably went down a dead-end street and humans at HQ seemed to intervene remotely to get things back on track. But I’ve been in Ubers where human drivers rolled through stop signs and made dangerous last-minute swerves.
There was a time when I believed that self-driving cars should be held to the standard of airplanes. Every mistake needed to be rigorously understood and any human death was unforgivable. But my view has evolved over time as human drivers have continued to kill tens of thousands of people a year. We need a solution that’s meaningfully better than human drivers, yes, but we shouldn’t wait for perfection before we start getting dangerous human drivers off the streets.
Lost in all the fulminating about automation and big-tech tyranny is the fact that self-driving cars are an attempt to solve a very serious problem. Traffic fatalities are a leading cause of death in the United States for anyone between the ages of 1 and 54. About 40,000 people die in car crashes a year in the U.S., with about one-third involving drunk drivers.
There’s a natural, though irrational, human bias toward the status quo. We tend to believe that things are the way they are for a good reason. But of course, technology has drastically improved human lives and human life spans already. Why stop now that more powerful computer chips and sophisticated artificial intelligence models open up new possibilities?
We should build the best world we can, and that includes minimizing traffic deaths by reducing the number of human drivers on the roads. That’s something the left, which advocates getting human drivers out from behind the wheel of their personal vehicles and onto buses, trains, bikes, scooters, you name it, has historically supported. But self-driving cars are getting left off that list.
Sixty-nine percent of Democrats believe the government will not go far enough in regulating driverless vehicles, while Republicans see things the other way: 59 percent believe the government will go too far.
Leaving aside seething hostility toward tech and private capital, and worries over job losses, the most credible objection to self-driving cars from the left is the fear that deploying them means doubling down on roads and sprawl, and undermining support for public transportation projects. But there’s no reason self-driving cars and public transportation need to be at odds. They can fulfill different needs. Autonomous vehicles are being deployed in San Francisco in fleets through ride-hailing programs, reducing the need for personal car ownership. If we can get self-driving cars working, self-driving buses on regular routes should be even easier.
And contrary to the view that driverless cars are being deployed unilaterally by tech billionaires, the people’s representatives (government officials) gave Alphabet-owned Waymo a license to operate. Our roads and motor vehicles are tightly regulated. Single incidents have derailed self-driving car projects, from Uber’s to, more recently, GM-owned Cruise’s, while human drivers kill tens of thousands a year unimpeded.
As a society, we’ve grown numb to the death and destruction on our roads, but we desperately need an intervention from whoever will deliver it. Anyone rooting against self-driving cars is cheering for tens of thousands of deaths, year after year. We shouldn’t be burning self-driving cars in the streets. We should be celebrating them. We should be urging local governments to beg self-driving car companies to come to our cities. The longer we resist this new technology, the more people die.
Read more from Eric Newcomer, a longtime technology journalist and co-host of the artificial intelligence conference the Cerebral Valley AI Summit, on his Substack, Newcomer.
1. Luxury belief. The people advocating most loudly for implementing these are the ones who do not live in the places where they are tested. You’re welcome to donate your neighborhood and children to the testing curve that goes with the implementation of any new technology. After all, if your child dies, you can console yourself that, statistically, it’s for the greater good.
2. Lack of data. These have never been deployed at scale. Saying they are safer than human drivers (implication: at scale) is like saying a barely tested vaccine is safe and effective. Maybe, maybe not. This claim is just scientifically invalid.
3. Liability matters. Right now, at least, there is the threat of prosecution for dangerous driving. Which court, exactly, is equipped to comb through billions of lines of code to discern who messed up? As far as I know, there has never been an autonomous system of any kind holding millions of lives at once in a given moment, and no one knows what the incentive structures look like when liability is decoupled from individual actors. And are you sure that among tens of thousands of engineers there will never be one horrifically stupid or bad actor?
4. Techno-utopians remain shockingly ignorant of the fact that technologies never develop as planned. Look at the horror that Facebook too often is. Or smartphones in the hands of kids. Technologies’ second-, third-, and fourth-order effects are inherently unpredictable, and the creators are never held accountable.
5. Individuals have autonomy. A massive centralized system, in contrast, is susceptible to hackers beyond any individual’s control. Generally, people don’t get dismembered when their bank account is hacked. How about a hack of hundreds of thousands of cars all doing 65 mph?
6. I’ve said for years that the solution to this is not to make Americans more sedentary through car expansion but to restructure cities to be human- and child-centric. Most Americans live in cities. Many cities around the world have already solved this problem with great urban planning and public transport. Cars are a bad idea for a lot more than road fatalities; in many ways they’ve diminished the beauty and livability of American cities, as well as health and human relationships.
7. Bad government actors. We’ve seen totalitarian overreach in many areas and it’s even worse in Canada. If governments control social media accounts and cancel bank accounts for political persecution, what exactly does a protest look like when those same corrupt government officials are installed at Waymo just like they were at Twitter?
My point: human freedom isn’t possible when large, opaque systems can in one moment derail someone’s intentions or life. There are many other highly successful interventions already in use to reduce traffic deaths that don’t carry any of the above risks. Even a car-centric city like Portland has reduced many of its speed limits to 20 mph, for example. AI driving is a different animal in many dimensions and should therefore be treated as its own category, not naively equated to the rollouts of past technologies.
It’s fine that the author enjoys chatting on the phone or “unplugging” while being whisked around in a sterile, silent vehicle. I’d rather risk death by having a conversation with the human being who is driving me around. I’ve had some great talks, met some interesting folks, and gotten perspectives that are different from my own.
All this “if it saves one life” business is getting really old.