
The ethics of autonomous driving: Quo vadis?

Whose safety should a driverless car prioritize? (Photo: CC0 1.0)

By Prof. Hermann Simon


The bottleneck for the social acceptance of automated driving lies in the ethics, says mastermind Professor Hermann Simon in an opinion piece for 2025AD.

The topic of autonomous driving is of paramount importance to Germany. After all, it is German companies, rather than Google or Tesla, that are leading in this field. Of the 2,828 autonomous-driving related patents registered worldwide since 2010, 1,646 – or 58 percent – are from Germany. With that in mind, it seems fitting that we are one of the first countries to have a commission concerned with the ethics of autonomous driving. Under the direction of the former judge of the Federal Constitutional Court of Germany, Prof. Udo di Fabio, the commission also includes highly competent experts, such as former SAP CEO and current President of the German Academy of Science and Engineering Prof. Henning Kagermann.

Although I am neither a car expert, an IT expert, nor a lawyer, the ongoing discussion led me to write the following opinion piece. I was particularly struck by the reported recommendation of Swiss computer scientist and philosopher Oliver Bendel "to keep the car relatively unintelligent and only permit autonomous driving in designated areas, such as motorways." Perhaps this is a "real" problem as defined by Murphy. "Real" Murphy problems are those for which there are no solutions.

Should we keep driverless cars "unintelligent"?

According to Kagermann, it is "generally undisputed" within the ethics committee that property damage should always be preferred over personal injury. Few in society are likely to dispute this principle. Yet when it comes to implementation, serious problems can arise. There are certain situations in which the principle can clearly be applied: if the choice is between hitting a person and hitting an animal, for example, there is no discussion. However, the overwhelming majority of accidents do not present such a clear separation between damage to property and injury to people. Even such comparatively harmless grey areas point to the fundamental problem: it will be extremely difficult to establish general principles.

Should algorithms decide over life and death? (Photo: iStock/ the-lightwriter)

This is clearly reflected in the second principle currently being discussed by the commission. Kagermann describes it as follows: "In cases of doubt, the car must protect pedestrians before vehicle passengers." For my reasoning, I am assuming the positive case: that the car has all the information it needs to identify exactly who is at risk and what the consequences of a certain action will be. Such a positive case will probably remain out of reach for a long time, as was demonstrated by the recent Tesla accident in which a bright truck could not be distinguished from the bright sky. But be that as it may, the technology is improving, and I do not believe that autonomous driving will fail because of the technology.

Coming back to the second principle, here's an example to consider: let's assume that the pedestrian is a ninety-year-old in poor health (and the car knows that!) and that the car's passengers are three ten-year-old children. The car is travelling at high speed and there are only two options: hit the ninety-year-old pedestrian, or swerve into a tree and kill the three children on impact. According to the principle, the system would have to drive the car into the tree. As this case shows, the second principle can lead to absurd outcomes.

A deeper question is the following: what would human drivers do in such a situation? The answer is that no one knows, not even the drivers themselves. They would not have time to make a lengthy, conscious, well-balanced decision; rather, they would react instinctively (whatever that might mean). What can be expected is that they will typically try to protect their own lives and those of their passengers. A court must then decide on guilt or innocence afterwards. Could it be that the second principle mentioned above even contradicts "normal" human behavior - and if so, what does that mean?

The fatal Tesla accident dominated the headlines in 2016. (Photo: Reuters)

Based on what has been reported so far, the commission refuses to put a value on human life. In the words of Kagermann, "a quantification of human life is unacceptable." He also explains that a car should not automatically choose the option in which an individual rather than a group would be killed. To my astonishment, he calls this question "very theoretical". But what is theoretical here? In many fatal accidents, it is exactly this issue that is at stake, for example when a vehicle carries several occupants.

How can a system decide whom to kill?

When it comes to autonomous driving, the system has to anticipate every conceivable situation. A real driver does not do that. Instead, they decide, consciously or unconsciously, as and when a situation arises. So how can a system work if we cannot quantify human life? How can the system decide whether to kill the ten-year-old child (or two ten-year-old children) or the ninety-year-old? It has been suggested that the car should simply "stay on its course" in such situations. But this, too, is a decision that includes an implicit quantification. Alternatively, you could let a random generator decide; for example, the child is the victim in 50 percent of cases, the ninety-year-old in the other 50. How does this help? If we choose different probabilities, say 80 percent for the ninety-year-old, this is already a form of quantification.
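To make this point concrete, here is a minimal, purely illustrative sketch in Python. The function name choose_victim and the 50/50 and 80/20 figures are taken from the example above, not from any real system; the point is simply that such a "random generator" cannot run until someone writes down explicit probabilities, and choosing those numbers is itself a quantification of the lives involved.

```python
import random

def choose_victim(weights: dict) -> str:
    """Pick one outcome according to the supplied probability weights.

    Purely illustrative: the function cannot be used at all until
    concrete numbers are assigned to each life at stake.
    """
    outcomes = list(weights.keys())
    return random.choices(outcomes, weights=list(weights.values()), k=1)[0]

# A 50/50 coin flip between the child and the ninety-year-old ...
print(choose_victim({"child": 0.5, "ninety-year-old": 0.5}))

# ... or 80 percent for the ninety-year-old: either way, the weights
# encode an implicit relative valuation of the two lives.
print(choose_victim({"child": 0.2, "ninety-year-old": 0.8}))
```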

Driverless cars will anticipate upcoming traffic scenarios. (Photo: HERE)

We're constantly being told that autonomous driving is, or at least will be, safer than human driving. I believe this argument. But consider the following analogy from medicine: let's assume that a new drug is verifiably shown to save a million lives, but it is also proven to kill a thousand people. With such a mortality rate, the new drug would have no chance of being approved. Could autonomous driving meet the same fate? It may objectively enhance traffic safety and yet fail to be socially accepted. Just imagine the following headline: "Autonomous car kills three children".

Life or death decisions should be left to humans

Among the choices that an autonomous driving system has to make are those involving life or death. Such situations can also arise during medical operations. Would such a serious decision be left to a surgical robot in that case? Or would the surgeon have to decide? Or, to take a more remote analogy, what if a decision had to be made regarding a death sentence?

My conclusion: I am not saying that autonomous driving will fail. Technically, I believe in it. The objective data will also work in favor of these systems. However, as in medicine, this might not be enough. The bottleneck for the social acceptance of this new technology lies in the ethics. And human drivers decide differently from the way today's systems are able to. The proposal by Oliver Bendel to keep the system "relatively unintelligent" would ultimately mean there is no real autonomous driving at all. If my considerations are correct, it would follow that efforts should be directed more towards the interaction between human and machine than towards "fully autonomous" driving. Certainly, all types of assistance systems that increase driving safety are extremely useful, but perhaps, when it comes to life and death, the ultimate decision should be left to humans.
