2025AD Forum: ethics of autonomous cars
Automated driving is a powerful tool to reduce traffic fatalities, but societies still grapple with moral challenges as illustrated by the trolley dilemma: a discussion between Dr. Joachim Damasky, Prof. Reinhard Merkel and Dr. Felix Gress.
Imagine introducing a transport system that increases freedom and prosperity for all, yet has one disadvantage: an average death toll of 3,200 individuals per year. Surely, this would not be socially acceptable.
Yet this is the current state of road traffic in a highly industrialized country like Germany, as it developed over the past decades. The US is faced with 35,000 and Japan with 3,900 deaths from car accidents every year. In fact, thanks to driver assistance systems such as anti-lock braking systems (ABS) and electronic stability programs (ESP), as well as passive safety systems such as seat belts and airbags, this figure is a record low for car-loving Germany, where the yearly number of traffic fatalities peaked at 21,000 in 1970.
A moral imperative
Reducing the number of traffic fatalities is a moral imperative, and automated driving is a powerful tool to do so. “It is a proven fact that, for example, a modern car’s autonomous emergency braking (AEB) significantly reduces the number of accidents it is involved in,” said Joachim Damasky, managing director at the German Association of the Automotive Industry, during 2025AD’s recent panel at the International Motor Show (IAA) in Frankfurt, Germany. “It also reduces the impact an accident has on the passengers.” The panel discussion, followed by a lively debate, was moderated by Continental’s head of corporate communication and public affairs, Felix Gress.
Yet automated driving faces several challenges. One example is the so-called trolley dilemma, in which a runaway trolley threatens to kill a group of people. The dilemma: if a bystander had the ability to reduce the number of victims by diverting the trolley away from the group and onto a single individual, should they do so or not? In automated driving, the question becomes: how should an autonomous car react in such a situation?
However unlikely it may be, this scenario highlights a fundamental tension between two schools of moral thought. The utilitarian perspective dictates that the most appropriate action is the one that achieves the greatest good for the greatest number. The deontological perspective argues that certain actions are wrong, even if they have good consequences.
In Germany, diverting the trolley would be illegal. “An independent third party, who happens to be in such a situation, must not make a decision that sacrifices one specific person to save a larger number of people,” Reinhard Merkel (University of Hamburg) said. “This one person has no obligation whatsoever to sacrifice his or her life for others, even if it is a group of several people.”
The difference between man and machine
Knowing the identity of the persons involved is key. Whether an automated car should be pre-programmed to hurt the individual instead of the group is an entirely different matter, said Merkel, a professor emeritus of criminal law and philosophy of law. The question then becomes more abstract: how does a society choose to handle a remote, abstract and highly unlikely risk?
Merkel, a member of the ethical council advising the German government, has an opinion. “I believe it is right to program a machine to reduce casualties in such a dilemma.” He argues the machine should be allowed to do something humans are not because the decision has been reached months or even years before the incident occurs.
Merkel compares this to large-scale immunization, which is socially acceptable despite the fact that side effects may harm some individuals. “Policymakers are right to force such collective risks onto a society if the identity of individuals who may be harmed is unknown,” he argues.
Opponents of automated driving may cringe, but Merkel is hardly a car lobbyist. The former Olympic swimmer is widely lauded as a critical thinker. And he is far from bluntly advocating autonomous driving.
Are you a risk factor?
Autonomous driving also raises complex questions such as “If automated driving has reached a level of technical proficiency and is widely accepted, will taking the wheel become an act of negligence?” There may be situations where humans should not be allowed to drive at all, if a society agrees that machine driving is safer for all. “Some people will perceive this as a deprivation of liberty,” Merkel warns.
While the trolley dilemma may be the most prominent, the challenges on the way to widespread automated driving abound. What about people who wish to drive for pure pleasure, as is often the case with motorbike riders? Given that human error is the most widespread cause of accidents, will individual driving become illegal? What if the least harmful outcome is for the car to drive into a building? Will buyers be willing to accept the potential of their car’s – and possibly their own – self-destruction?
Among many other challenges, the notion of insurance will have to be rethought in an effort to reduce traffic fatalities, argues Damasky. Clearly, he stresses, the potential of automated driving to reduce the number of accidents is worth the effort.
In light of breathtaking technological innovation, we may even have to rethink our relation to those driving next to us. As cars become ever more intelligent, drivers may face a moral obligation to share their insights. This means that older vehicles, which share the road with automated cars, will have to be upgraded to communicate data such as position, speed and any dangers lurking behind the next corner. “Otherwise, they won’t be ‘visible’ to the automated cars around them,” says Damasky.
Automated driving will undoubtedly make roads safer, yet societies need to solve moral dilemmas now to facilitate the acceptance of new technologies.