
"Autonomous zombies are not an option"

Who will be responsible if driverless cars crash? (Photo: Fotolia / Gena)

Prof Rafael Capurro

Relieve the drivers - but don't replace them: In an interview, philosopher Rafael Capurro explains why we must be willing to assume responsibility for the actions of driverless cars.

This interview with Professor Rafael Capurro, an information ethics expert, was first published in the German fleet management journal Flottenmanagement. The German original can be read here. You can watch a video debate between Professor Capurro and Continental CEO Elmar Degenhart here.

Flottenmanagement: Professor Capurro, automated driving raises not only technological challenges but also ethical and moral questions. Let’s start with the dilemma that is probably best known. How should an autonomous vehicle react if it is facing the decision whether to save the vehicle’s passengers or pedestrians? On which criteria should the vehicle base its decision?

Prof. Rafael Capurro: We have to distinguish between assisted or automated driving on the one hand and fully automated or autonomous driving on the other. Ethical questions arise particularly where algorithms with moral and legal rules fully replace the driver. By morality, I mean implicit or explicit customary rules (Latin: mores = customs) based on the values and principles of the good life. They are the principles underlying every society. By contrast, ethics is the philosophical and also everyday reflection on those principles.

An algorithm follows predefined rules. People can choose between open, incalculable possibilities and are able to reflect upon such rules. That is why they are the only ones who can take responsibility. It is virtually impossible to know every possible situation in advance and preprogram it accordingly. The ethical problem of autonomous driving is not calculability itself; the problem is the very belief in calculability, which simulates an unrealistic level of safety.

Citizens must assume responsibility

The "best-known" dilemma that you mentioned results from determining certain rules for standard situations that could turn out completely differently in reality. You can program algorithms that take the rules of current morality and law for such standard situations into account. But a rule that focuses on protecting the passengers could – in an unpredictable situation – even endanger them. If neither carmakers, programmers nor vehicle owners accept moral responsibility for the consequences of an accident caused by an autonomous vehicle, then we are seemingly in a state of amorality.

I use the term "seemingly" because each individual and society as a whole has the responsibility to deal with these vehicles. Society has to decide whether it wants to embrace the technology. The citizens, who are ultimately the beneficiaries and victims of this technology, must be willing to take the risks and assume responsibility for it. You cannot replace this political and societal decision with an algorithm.

Programming driverless cars - a moral challenge. (Photo: Fotolia / madgooch)

Considering the current state of technology, autonomous vehicles should only be permitted if such dilemmas are almost impossible. But that will not be the case as long as there are no digitized traffic management systems that remove some of the autonomy from an autonomous vehicle.

This raises the question for whom and for which purposes automated and autonomous driving make sense. There are two extremes: a vehicle in which all responsibility rests with the driver, and a fully autonomous vehicle. Between those extremes, there are many possibilities.

Automated vehicles that assist the driver in assuming responsibility in a technically enlightened way can nonetheless tend to be oriented not only toward relieving the driver, but toward replacing them.

Will we achieve societal acceptance? Only if autonomous driving makes certain everyday situations, especially in urban traffic, safer and more convenient for all road users: passengers of autonomous cars, as well as drivers of conventional cars, pedestrians, cyclists and so forth. The question also arises as to whether all road users in the near or more distant future will adapt adequately to the changes. Experience shows that such a process can take time.


Flottenmanagement: Christoph von Hugo, Senior Manager Active Safety at Daimler Passenger Cars, said at the Paris Motor Show that an autonomous car should prioritize its passengers’ safety over that of other road users in the case of an accident. He reasoned that in complex accident situations you can’t predict what happens to the people you initially saved. Although Daimler has already backpedaled here a little, the statement remains. Would a vehicle that in case of doubt decided against its own passengers be marketable at all?

Prof. Rafael Capurro: Strictly speaking, the vehicle doesn’t make decisions. The anthropomorphic phrases in that area are misleading because they suggest that a driverless, but not unguided, vehicle would be an autonomous being. A being that itself sets the rules for its own actions, including its goals, and has an eye on the well-being of society as a whole. But autonomous vehicles are not members of human society and cannot impose rules on themselves.

Will programmed rules save pedestrians - or endanger them? (Photo: Volvo)

It’s the carmaker who assures buyers that they shouldn’t worry, claiming that, in case of an accident, saving the passengers will take precedence over other road users. That is not a proper ethical criterion for the good life, since it exempts the car’s driver from responsibility for other road users. The risk that the algorithm would, if necessary, decide against the passengers’ safety is ethically questionable. But the assurance that passengers are protected first is just as questionable – and indefensible. Buyers and passengers have to live – and ultimately even die – with the uncertainty of the non-programmable and unpredictable. We must not accept a law of the strongest. Autonomous vehicles could create a situation where no one is responsible anymore.

Even if accident numbers decreased due to a massive deployment of autonomous vehicles – which is unpredictable – the question of responsibility would still be open. Automated vehicles would be a different case. Ultimately, you cannot buy safety by abandoning liberty and risk.

Compared with this, automated vehicles and mobility systems that minimize the risk of (lethal) accidents as far as possible would be marketable. Human failure cannot be ruled out. But a vehicle whose programming is based on such dilemmas is ethically unacceptable, since it challenges the foundations of social life. This is surely not something we want.

Should we make driverless cars utilitarian?

Flottenmanagement: According to a survey carried out by the psychologist Jean-François Bonnefon, a majority of the 2,000 respondents said they would prefer vehicles with a utilitarian programming code. Are they right? How should an autonomous vehicle act in your opinion?

Prof. Rafael Capurro: The question is: what is a "utilitarian programming code"? There are many varieties of utilitarianism, for instance rule utilitarianism, act utilitarianism or preference utilitarianism. On top of that, there are other ethical theories such as the ethics of goods, ethics of justice, deontological ethics or ecological ethics, to name just a few. Such "programming codes" furnish the vehicle with a feigned autonomy. In truth it’s not a vehicle anymore but something ghostly, a technological zombie that simulates knowledge and responsibility. Such zombies can be hacked and used as killing machines. That’s why the question of security is just as important as the safety of passengers and other road users.

Will automation assist drivers - or replace them? (Photo: BMW)

Flottenmanagement: Who should decide how autonomous vehicles act? How could international and intercultural standards be created? In that regard, what do you think of the ethics committee that the German Transport Minister Dobrindt has initiated?

Prof. Rafael Capurro: Autonomous vehicles don’t make decisions and they don’t act. They follow rules. We need to have agreement on the terminology first. Then we can consider if, when and for what purpose those vehicles make sense. The ethics committee of Mr. Dobrindt has assumed this task. But we also need a broad philosophical, scientific and public discussion on the future of mobility. The ever-increasing density of traffic in metropolitan areas and megacities poses tremendous challenges: how can we even guarantee that we can move our goods safely, smoothly and seamlessly across the globe?

When it comes to international and intercultural standards, I think we primarily need to consider in which areas the risks are higher than the presumed benefits. Algorithms that are coded for specific environments or different traffic regulations need to be tested in intensive and intelligent trials. This rules out the immediate commercialization of autonomous zombies. They could undermine the public’s trust in autonomous driving altogether and prevent existing mobility systems from improving and becoming safer. The economically realistic motto is: relieve drivers, don’t replace them. I say this with caution. Nobody is able to predict what mobility will look like in twenty or fifty years, let alone a hundred.

Are we creating "autonomous zombies"? Should cars only assist humans? What do you think? Share your thoughts in the comment section!
