
The driverless car dilemma: morality influenced by culture?

Who should a driverless vehicle protect? (Photo:CC0 1.0)

René Tellers

The biggest ever survey on machine ethics raises more questions, Waymo has a friendly talk with the police and GM’s plans are in jeopardy: read our weekly analysis of the most important news in automated driving!

It takes just two questions to describe the so-called social dilemma of self-driving cars. First, should an autonomous vehicle protect pedestrians even if that means sacrificing its passengers? In a 2016 study by Bonnefon et al. (which can be found in our studies database), a majority of respondents said yes. Second, should a self-driving car that you own be programmed accordingly? A clear majority said no. This paradox led the researchers to create the biggest machine ethics survey ever.

The Moral Machine platform laid out 13 scenarios in which someone’s death was inevitable – varying age, gender, socioeconomic status and the number of people involved. 2.3 million people from 233 countries and territories participated. The study, which is not representative because male respondents were over-represented, was just published in Nature. The most striking finding: moral choices are not universal. Well, some are. Regardless of their age, gender or country of residence, most people spared humans over pets, and groups of people over individuals. But digging deeper, people from North America or Europe were more likely to sacrifice older lives to save younger ones than people from Japan, Indonesia or Pakistan. The survey also found that choices often correlated with a country’s level of economic inequality. Residents of Finland, a country with a small gap between rich and poor, showed little preference for swerving one way or the other. People from Colombia, however, tended to spare the person of higher social status.

Bryant Walker Smith, a legal expert at the University of South Carolina, questioned the practical use of the study, telling Nature: “I might as well worry about how automated cars will deal with asteroid strikes.” Walker Smith is right: these are highly hypothetical scenarios. Still, I consider this a groundbreaking study, because it shows the global impact automated driving will have on our society. It’s a worldwide matter which demands common standards but defies easy answers. If this study helps to further ignite a global conversation, then that in itself is a good thing.

Woop Woop: It’s da sound of da police!

Yet another grey legal area of automated driving: law enforcement and driverless vehicles. What happens when an officer wants to pull over a driverless car with no human backup driver in it? Luckily, Waymo has just released an “Emergency Response Guide” which details how to deal with such situations.

Waymo's driverless vehicles are able to identify police or emergency sirens. (Photo: Adobe Stock / Артем Константинов)

According to Waymo, the vehicle uses its sensors to identify police or emergency vehicles by detecting their appearance, sirens and emergency lights. The vehicle is then designed to pull over at the nearest safe stop. It will unlock its doors and allow the officer to check the in-vehicle documents (hidden under the sun visor), to switch off self-driving mode or to reach Waymo’s support hotline by pressing a help button in the interior console.
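As a rough illustration, the response sequence Waymo describes boils down to a simple rule: on detecting any emergency cue, pull over, unlock, and expose the help controls. The event labels and step names in this sketch are my own shorthand for the behavior described above, not Waymo’s actual software:

```python
# Illustrative sketch only: the cue labels and step names are my own,
# summarizing the behavior Waymo describes in its guide.
EMERGENCY_CUES = {"siren", "emergency_lights", "police_vehicle_appearance"}

def emergency_response(detections: set[str]) -> list[str]:
    """Return the ordered response steps for a set of sensor detections."""
    if detections & EMERGENCY_CUES:  # any one emergency cue triggers the sequence
        return [
            "pull_over_at_nearest_safe_stop",
            "unlock_doors",        # lets the officer reach the in-vehicle documents
            "enable_help_button",  # connects the officer to Waymo's support hotline
        ]
    return []  # no emergency cue: continue driving normally

print(emergency_response({"siren"}))
```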

Don’t get me wrong: it’s positive that Waymo considers aspects of law enforcement in its vehicle development. Still, the whole approach strikes me as slightly patronizing – it basically tells police: this is our car, and this is how you deal with it. I’d much rather see law enforcement and legislators lay out their requirements – and then discuss with the industry how to safely implement them.

GM: Taking the rough with the smooth

Part of the reason why Waymo is actively suggesting emergency response guides is their forerunner status: they have by far the most driverless vehicles on the road. One of their biggest competitors, General Motors, along with its autonomous driving unit Cruise Automation, is apparently struggling to keep up. According to a Reuters report, unexpected technical challenges mean GM’s plan to put a driverless ride-hailing service on the roads of San Francisco in 2019 is highly unlikely.

General Motors’ self-driving car plans are in jeopardy. (Photo: Adobe Stock / rgbspace)

“Nothing is on schedule,” one GM source told Reuters. Allegedly, its cars still struggle to identify whether objects on the road are moving or stationary. At times, the software failed to recognize pedestrians and mistakenly detected “phantom bicycles”, causing the cars to brake erratically. In addition, the cars are not yet capable of responding to fire truck sirens (kudos to Waymo!).

Bad news is never welcome, but for GM it arrives at a particularly bad time. Two companies, Japan’s SoftBank and Honda, have committed to investing five billion U.S. dollars in Cruise. But the delivery of that capital depends on Cruise achieving certain performance targets. With the San Francisco goal in jeopardy, so is the cash infusion. Financing the development of driverless cars is a huge gamble: enormous costs up front with no guarantee that the investments will ever pay off. SoftBank told Reuters that it has factored in setbacks for the technology. But it will be interesting to see just how faithful investors around the globe remain as more setbacks inevitably arise.

So long, drive safely (until cars are driverless),

René Tellers



