Uber's self-driving Volvo was involved in a fatal accident in Arizona. (Photo: Uber)

The fatal Uber crash: a critical moment for driverless cars

Safety and Ethics, Technology and Business

Angelo Rychel




For the first time, a fully autonomous car has killed a pedestrian. While the police have already exonerated Uber, the crash still marks a crucial point for driverless mobility. The struggle for public trust has taken a huge hit.

It’s the moment many people in the car industry have feared: for the first time, a fully autonomous car is reported to have killed a person. The self-driving Uber struck and killed a 49-year-old woman crossing a street in Tempe, Arizona, on Sunday night. Uber immediately halted their testing of driverless vehicles in Arizona, San Francisco, Toronto, and Pittsburgh. “We’re thinking of the victim’s family as we work with local law enforcement to understand what happened,” Uber CEO Dara Khosrowshahi said in a tweet.


The fatal crash instantly spurred intense media scrutiny worldwide. It could seriously alter the further development and roll-out of the technology because it raises many urgent questions. Among them: why did the technology fail to prevent the crash? Is testing in urban scenarios too risky at this point in time? Is Arizona's lax regulatory approach still justifiable? And has the multi-billion-dollar race towards autonomous driving led carmakers to rush?



The crash occurred around 10 pm in Tempe near a busy intersection with multiple lanes in every direction. A 49-year-old woman was pushing her bicycle across the street when she was hit by a self-driving Uber vehicle. The Volvo XC90 SUV was travelling in autonomous mode at a speed of 38 miles per hour and showed no signs of slowing down, the police said, according to The Verge. A safety driver behind the wheel wasn't able to retake control in time to prevent the accident. Tempe's Police Chief Sylvia Moir told the San Francisco Chronicle: "The driver said it was like a flash, the person walked out in front of them." The crash happened roughly 100 yards away from a crosswalk.

Following a preliminary investigation of the video footage taken by the Volvo’s on-board cameras, Moir said that the ride-hailing service was likely not at fault for the accident. From watching the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.”


[UPDATE: The police released the dash-cam video of the immediate moments before the crash on Wednesday. As can be seen, the backup driver wasn't giving his full attention to the road. This could raise the question whether Moir's assessment will hold true – and why the car's sensors failed to detect the pedestrian. Click here for further analysis in our column "The Week in Automated Driving".] 

Inevitably, the accident evokes memories of the fatal Tesla crash in 2016, when a Model S electric sedan set to Autopilot collided with a truck in Florida, killing the Tesla driver, Joshua Brown. The difference, however, is that Autopilot is merely a driver assistance system with level 2 automation. A report later concluded that Brown had received multiple warnings to retake the wheel. The Uber accident now marks the first time a vulnerable road user has been killed by a fully autonomous car.



While the police have already exonerated Uber, the question remains whether the technology functioned as it should have. Notably, the autonomous car was driving above the speed limit of 35 miles per hour. According to Wired, conditions on Sunday were warm with little wind – which means nothing should have blurred the car's vision. Could the car have avoided the pedestrian by swerving into another lane?

Critics caution that the industry is rushing premature technology onto the roads. "The car cameras, the vision systems, they don't perform inductively, meaning they can't guess about the appearance of someone in a particular place and time," Missy Cummings, robotics expert at Duke University, told the Washington Post. It raises the question: should autonomous cars be kept out of inner-city areas as long as they cannot cope with all facets of urban traffic – like pedestrians emerging "from the shadows"?

Others will argue that this very crash proves the need for further real-life testing in those areas. Road testing is the only way the systems can learn and adjust to their environments, eventually reaching a level of safety that cuts down on the number of motor vehicle deaths overall, Timothy Carone, associate teaching professor at University of Notre Dame, told the Post.



It’s no coincidence that Uber and its competitors like Waymo have been heavily testing autonomous vehicles in Arizona. The state has very actively been courting driverless car companies and is known for its somewhat ‘light’ regulation. “Arizona welcomes Uber self-driving cars with open arms and wide open roads,” Governor Doug Ducey said in 2016. “While California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses.”


A deregulation race has unfolded between U.S. states like Arizona, California and Nevada. Both Arizona and California have recently passed laws that allow companies to test autonomous vehicles on public roads without a backup driver present – if they meet certain safety requirements. It will be interesting to see how the Tempe incident could affect such rules – especially since the U.S. Congress is considering a national framework that would preempt states from establishing their own laws. "This tragic incident makes clear that autonomous vehicle technology has a long way to go before it is truly safe for the passengers, pedestrians, and drivers who share America's roads," U.S. Senator Richard Blumenthal said in a statement. "In our haste to enable innovation, we cannot forget basic safety."



Does the aforementioned "haste to enable innovation" increase the risk of fatal accidents? There is no evidence at the moment to suggest so. But as more and more public testing gets underway, the statistical likelihood of further road casualties caused by autonomous cars increases. Professor Armin Grunwald, a renowned technology assessor for the German parliament, said in an interview with 2025AD in 2016 that "at some point, this complex technology in some specific situation will have a malfunction and cause an accident. And people will be harmed by it. This will spur debates." At the same time, Grunwald predicted that "if autonomous driving can reduce a large proportion of those road deaths, the public will accept it."

"One thing seems certain," 2025AD wrote after the fatal Tesla accident in 2016. "The reactions to the fatal crash will show just how willing both consumers and lawmakers are to trust in future technologies." We know today that they didn't reject the technology as a whole, just as Armin Grunwald predicted. But the entire driverless car industry will have to work hard to ensure the public doesn't lose its trust. The only way to do that? By being transparent, by admitting mistakes – and by only putting safe technology on our roads.

