In 2015, Chris Urmson, then leader of Google’s self-driving car project, said that one of his goals in developing a fully self-driving car was to ensure that his 11-year-old son would never need a driver’s license.
The subtext was that in five years, when Urmson’s son turned sixteen, self-driving cars would be so ubiquitous, and the technology so superior to human driving, that his teenage son would have no need or desire to learn to drive for himself.
Well, it’s 2024 and Urmson’s son is now 20 years old. Are there any bets on whether he’ll get that driver’s license?
One of the hallmarks of the race to develop autonomous vehicles is the wildly optimistic predictions about when they will be ready for everyday use. The landscape is littered with missed deadlines.
In 2015, Baidu senior vice president Wang Jing said the tech company would sell self-driving cars to Chinese customers by 2020. In 2016, then-Lyft president John Zimmer claimed that “a majority” of trips on its rideshare network would take place in fully self-driving cars “within five years.” That same year, Business Insider said there would be ten million autonomous vehicles on the road by 2020.
GM said it would mass-produce self-driving cars without steering wheels or pedals by 2019. Ford, slightly more conservatively, predicted it would do the same in 2021. And in a perfect encapsulation of the autonomy hype of the mid-2010s, Intel forecast in 2017 a $7 trillion industry built around autonomy by 2050 – more than double the size of today’s global auto industry.
Of course, no one has been more optimistic than Tesla CEO Elon Musk, who has elevated making false predictions about the readiness of autonomous vehicles to an art form. “By the middle of next year we will have well over a million Tesla cars on the road with fully self-driving hardware,” Musk said in 2019, adding that Tesla’s Full Self-Driving (FSD) feature would be so reliable that the driver could “go to sleep.” Teslas with the company’s FSD software are still not autonomous, and drivers would be wise not to sleep in their cars.
Sure, there are some self-driving cars on the road today. They operate in San Francisco, Phoenix, Los Angeles, Hamburg, and Beijing, among other cities. They are controlled by some of the largest, best-capitalized companies in the world. You can even ride in some of them.
But they’re stuck. Not stuck in the sense of a Tesla Cybertruck mired in less than an inch of snow, but confined within geofenced service areas, held back by their own technological shortcomings, opposed by unions and advocates of more reliable modes of transportation, and restricted from driving on certain roads or in certain weather conditions.
“For too long, the autonomous vehicle industry – especially those developing and testing robotaxis – has gotten away with selling a vision of the future that they should know full well will never become a reality,” wrote Sam Anthony, co-founder and CTO of Perceptive Automata, a now-defunct AV company, in a 2022 newsletter.
We assumed that the robots would be able to drive as freely as we do. After all, we have built a world where we humans can drive anytime, anywhere. So why have we gotten it so wrong?
Before we explore why the industry collectively whiffed on the rollout of self-driving cars, it’s instructive to look at why these predictions were made in the first place. Why put up these goalposts if they never really mattered?
The answer, of course, is money. By promising that driverless cars were “around the corner,” about to take over our roads, companies could rake in hundreds of billions of dollars to fund their experiments.
The amount of money flowing into the autonomous vehicle space also had the knock-on effect of convincing regulators to take a lax approach to self-driving cars. AV boosters warned that too many regulations would “stifle innovation” and jeopardize the technology’s promised benefits, whether in safety or job creation.
And it turns out that regulators were very receptive to these arguments. The federal government – whether under Obama, Trump, or Biden – has done very little to stand in the way of companies testing their technology on public roads. A bill in Congress that would accelerate the rollout of cars without steering wheels and pedals has stalled amid disagreements over liability, but you wouldn’t know it from looking at the industry’s fundraising efforts.
Some states, like California, have done their best to establish some sort of regulatory playbook. But most were eager to attract companies, believing that self-driving cars were the future. And who wants to stand in the way of the future?
For almost a decade, AV operators were able to raise virtually unlimited amounts of money. They did this through normal fundraising channels, or by teaming up with major technology and automotive companies. Cruise Automation was acquired by General Motors. Ford invested $1 billion in Argo AI. Google, always slightly ahead of the rest, spun out its self-driving car project under the name Waymo. Amazon bought Zoox. Hyundai joined with Aptiv to form Motional. Some estimate that more than $160 billion has flowed into the industry over the past twelve years.
And after the pandemic, the companies unable to deal with big automakers or tech giants found a new way to raise money quickly: SPACs. Traditional IPOs were slow, and special purpose acquisition companies were fast, so dozens of mobility-focused startups went public by merging with these so-called “blank check” companies to gain access to more money faster.
And despite a number of setbacks – crashes, lawsuits, and investigations – the money kept pouring in. Funding for AV companies peaked in 2021, when the industry raked in $12.5 billion, led by GM’s Cruise, which raised a whopping $2.75 billion.
Predictions about the imminent arrival of safe, reliable self-driving technology helped accelerate the flow of money. And once those predictions didn’t come true, the money started to dry up.
Why did the predictions fail? While the technology was incredibly effective in getting us most of the way there, it stumbled as it got closer to the finish line.
In the AV world this is called the “long tail of nines.” It’s the idea that you can relatively quickly build a vehicle that drives 99.9 percent as well as a human, but you will never quite reach 100 percent. And that’s because of edge cases – unpredictable events that confuse even human drivers.
When training an AI program to drive, you can predict a lot about what to expect, but you can’t predict everything. And when those edge cases eventually occur, the car can make mistakes – sometimes with tragic consequences.
Take the example of Cruise. Last October, a woman was hit by a human driver while crossing the street in San Francisco. The impact sent her flying into the path of a driverless Cruise vehicle, which braked hard but also struck her. The Cruise vehicle then attempted to pull over to the side of the road, unaware that the woman was still pinned underneath, injuring her further.
One of the first things Cruise did in response to the incident was to recall all 950 vehicles it had on the road in the US. The recall took the form of an over-the-air software update to the collision detection subsystem, so that the vehicle remains stationary during certain crash incidents rather than attempting to pull over. Cruise encountered an edge case and quickly issued a correction for it.
But how many edge cases are still in the shadows? And how many more people will be injured – or even killed – before these cars are seen as more reliable?
Waymo is at the forefront of trying to convince the public and regulators that its vehicles are as safe as, if not safer than, human drivers. It has released a number of studies and statistical analyses in recent years showing that its vehicles have fewer accidents, cause less damage, and improve overall road safety.
But for every Waymo, there’s an Elon Musk, whose misleading predictions about the imminent readiness of self-driving cars muddy the waters, obscuring the fact that reality is much further away than promised. Waymo is also accepting legal liability for accidents involving its vehicles – something Tesla has so far refused to do.
But it is Tesla, not Waymo, that shapes public perception of self-driving cars. Broken promises and failed predictions are fueling growing skepticism among the public, who with each passing year are increasingly turned off by the idea of handing control of their vehicles over to a robot.
Without passengers there is no business. But without safe, reliable technology, there is no future for autonomous vehicles.