
The aftermath of the crash: Is Uber’s driverless tech truly ready?

Pedestrian identification is still a feat for self-driving systems. (Photo: Uber)

2025AD Team

Hello, automated driving community! Video footage raises questions about Uber’s sensors, and Arizona looks like the Wild West of AD testing: everything you need to know in the aftermath of the fatal Uber crash.

For a moment, it looked like Uber could breathe a sigh of relief. After the fatal crash of a self-driving Volvo SUV in Tempe, Arizona, local authorities were quick to exonerate the ride-hailing firm, saying that the collision with a 49-year-old woman walking her bike across the road “would have been difficult to avoid”.

Well, now the public debate seems to have taken a U-turn. While Uber and Toyota promptly suspended their driverless testing programs, two videos released by the Tempe Police Department have raised questions about the readiness of parts of the vehicle’s technology.

A four-second dashcam video shows the Uber driving down a dark and largely empty road before it suddenly collides with a person pushing a bike who walks into the car’s path. Another video from inside the car shows the 44-year-old operator gazing at her lap instead of paying attention to the traffic. Only at the moment of impact does she look up, in shock.

The footage only increases the pressure on already scandal-ridden Uber and raises three key questions. First: Why did the Volvo, decked out with high-end sensors to read its surroundings in real time, fail to brake or swerve? Second: Did the operator neglect her duties as a safety driver by taking her eyes off the road? Third: Was it human error, with the operator mistakenly activating the series-production cruise control instead of the car’s autonomous mode?

Sensor software: Ready to read the road?

Dry weather. No traffic. A safety driver at the wheel. Conditions could not have been better for the autonomous Volvo. The fact that it was nighttime is irrelevant: while darkness can impede camera vision, radar works equally well by day and by night (LiDAR sensors even perform better at night, thanks to reduced glare). The victim was also within the sensors’ range. So why the mistake?

“Autonomous vehicles make decisions based on what their sensors detect,” Bart Selman, a computer science professor at Cornell University, is quoted by Scientific American as saying. “But if its sensors don’t detect anything, the vehicle won’t react.” This implies either a sensor fault or a failure of the software to interpret the sensor input correctly: “You need to distinguish the difference between people, cars, bushes, paper bags and anything else that could be out in the road environment,” Matthew Johnson-Roberson, an engineering professor at the University of Michigan, told Bloomberg. It is possible that the sensors spotted a roadside object, but the algorithms failed to identify the odd combination of the woman, her bicycle and her shopping bags as a human being. Had the software recognized her as a pedestrian, it would likely have slammed on the brakes, preventing or significantly mitigating the crash.
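
To make that failure mode concrete, here is a minimal Python sketch of a perception-to-planning hand-off. The class names, thresholds and braking rule are illustrative assumptions on our part, not Uber’s actual stack; the point is simply that a detection the classifier cannot confidently label never reaches the braking decision:

```python
from dataclasses import dataclass

# Illustrative sketch only: the class names, thresholds and braking rule
# below are assumptions for explanation, not Uber's actual perception stack.

BRAKE_CLASSES = {"pedestrian", "cyclist", "vehicle"}  # labels the planner brakes for

@dataclass
class Detection:
    label: str         # classifier output, e.g. "pedestrian" or "unknown"
    confidence: float  # classifier confidence, 0.0 to 1.0
    distance_m: float  # range to the detected object, in metres

def should_brake(detections: list[Detection],
                 conf_threshold: float = 0.8,
                 min_gap_m: float = 30.0) -> bool:
    """Brake only for nearby detections that are confidently classified
    as an object type the planner knows it must avoid."""
    return any(
        d.label in BRAKE_CLASSES
        and d.confidence >= conf_threshold
        and d.distance_m < min_gap_m
        for d in detections
    )

# The sensors "saw" something, but the classifier could not resolve the odd
# shape of a person, a bicycle and shopping bags into a known class:
odd_shape = Detection(label="unknown", confidence=0.45, distance_m=25.0)
print(should_brake([odd_shape]))  # False -> no braking despite the detection
```

In a pipeline like this, the sensors can register an obstacle perfectly well and the vehicle still does nothing, because acting is gated on a confident classification rather than on the raw detection.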

Pedestrian identification has long been one of the hardest tasks for self-driving systems – and the Tempe incident shows that it still is. The race to bring fully driverless cars to the masses as soon as possible has pushed firms and U.S. lawmakers alike to take risks. But are they rushing too much by putting partly immature sensor tech on public roads? Industry players should be careful not to enter a dangerous phase in the race for autonomy: one in which cars are not fully autonomous, but humans are no longer fully engaged in monitoring them. Certainly, after intensive use of simulation, real-world testing is the only way for self-driving tech to mature and help make our roads safer. But without a watchful human attendant present? “We’re not there yet,” Matthew Johnson-Roberson reminds us.

Welcome to the Wild West

While a possible sensor malfunction is under investigation, the video also hints at potential shortcomings in Arizona’s driverless legislation. In a quest to make the state an AD testbed, Governor Doug Ducey loosened its regulatory requirements by executive order in early March.

One of the key points: allowing driverless cars to operate without a safety driver behind the wheel. “Testing or operation of vehicles on public roads that do not have a person present in the vehicle shall be allowed only if such vehicles are fully autonomous,” the document reads. Among other caveats, the cars must be “equipped with an automated driving system that is in compliance with all applicable […] safety standards.”

Arizona’s regulations don’t explicitly require firms to keep a backup driver. Yet Waymo and Uber officially commit to having a safety operator ride shotgun or sit in the back seat. Are they simply there as a fallback should things go awry? If so, that fallback was of no use in this case. Several media outlets, among them Curbed and the Daily Mail, report that the operator was supposed to retake control in an emergency. According to the Tempe police, she was not impaired at the moment of the crash and therefore could have reacted, possibly preventing it.

Or was it human failure after all? The cause of the accident could have been human-machine miscommunication – a possibility that hasn’t been publicly discussed yet. Police found that the Volvo was speeding at the moment of the crash, going 38 mph where the limit was 35 mph. What if the operator wanted to activate the Volvo’s autonomous driving mode, but mistakenly enabled the series-production cruise control instead? Clear human-machine interaction (HMI) is a key dimension for the acceptance of AD. Future vehicles will have to indicate the current driving mode and possible road scenarios to their drivers – including the request to retake the wheel.
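
To illustrate what unambiguous mode communication could look like, here is a small Python sketch. The modes, confirmation step and messages are our own assumptions about such an interface, not any carmaker’s actual design:

```python
from enum import Enum

# Illustrative sketch only: the modes, confirmation step and messages are
# assumptions about what unambiguous HMI could look like, not any
# vendor's actual design.

class DrivingMode(Enum):
    MANUAL = "manual"
    CRUISE_CONTROL = "cruise control"  # driver assist: human stays responsible
    AUTONOMOUS = "autonomous"          # self-driving stack is in charge

class ModeHMI:
    def __init__(self) -> None:
        self.mode = DrivingMode.MANUAL

    def request_mode(self, requested: DrivingMode, driver_confirmed: bool) -> None:
        """Change modes only after explicit confirmation, and always
        announce the result, so an assist mode can't pass for autonomy."""
        if not driver_confirmed:
            print(f"Mode unchanged. Confirm '{requested.value}' to activate it.")
            return
        self.mode = requested
        if self.mode is DrivingMode.AUTONOMOUS:
            print("Autonomous mode active. Stay ready to retake the wheel.")
        else:
            print(f"{self.mode.value.capitalize()} active. You are still driving.")

    def takeover_request(self) -> None:
        """Alert the driver that the system is handing back control."""
        print("TAKE OVER NOW: retake the wheel.")

hmi = ModeHMI()
hmi.request_mode(DrivingMode.CRUISE_CONTROL, driver_confirmed=True)
# -> "Cruise control active. You are still driving."
```

The design choice here is that no mode change happens silently: every switch requires explicit confirmation and is announced, so a driver could not sit behind cruise control believing the autonomous stack was engaged.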

Shortly after the incident, Consumer Watchdog slammed Governor Ducey for allegedly turning Arizona into the “Wild West” of driverless testing. The NGO may have a point: the incident shows that, for the sake of safety, well-considered regulation is certainly better than (almost) no regulation. Baidu, for instance, has just been granted permission to test driverless cars in Beijing, China. But the license is tied to conditions: the technology must undergo over 5,000 km of training and evaluation, and the backup drivers must train for at least 50 hours.

On the road to autonomy, public trust and societal acceptance will not happen overnight – it is safe technology that will earn them. So let’s pursue the autonomous vision step by step, instead of going all in at once.

So long, drive safely (until cars are driverless),

The 2025AD Team 
