Automated driving: Are we approaching a moral crossroads?

Steering towards some difficult ethical questions? (Photo: Fotolia)

A moral imperative? An ethical dilemma? The introduction of self-driving cars will bring more than just technological challenges. A contribution by Wendell Wallach.

Wendell Wallach chairs the Technology and Ethics Study Group at Yale University. In his guest contribution he raises some ethical questions relating to autonomous driving to which we, as a society, owe at least some thought as we move forward. 

If, as their developers contend, self-driving cars radically reduce traffic accidents and fatalities, then their adoption is morally acceptable and truly beneficial. However, autonomous vehicles also pose innumerable ethical challenges and will have societal impacts that diminish, though certainly do not offset, those benefits. The greatest of these challenges will arise if and when self-driving cars prove successful.

A moral obligation?

According to research conducted for the U.S. National Highway Traffic Safety Administration between 2005 and 2007, human error is a factor in 93% of automobile accidents. Inattention, distraction, or fatigue commonly cause or exacerbate these errors. This alone suggests that a car's computer, with its single-minded attention to the road, will dramatically reduce accidents. Furthermore, the time it takes a driver to recognize and react to a dangerous situation can be anywhere from a quarter of a second to much longer. An automated car with sufficient sensors and well-designed software can hit the brakes in a matter of milliseconds (thousandths of a second), presuming that the car recognizes the dangerous situation as well as an attentive human would. By a simple utilitarian calculation, this means that the benefits of autonomous vehicles (AVs) far outweigh any costs, and that there is a moral obligation to ease the way for their speedy adoption.
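To put that reaction-time gap into perspective, here is a rough back-of-envelope sketch of how far a vehicle travels before braking even begins. The speed, human reaction time, and system latency below are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope comparison of the distance covered before braking begins.
# The speed, human reaction time, and system latency are assumed values
# chosen purely for illustration.

def distance_before_braking(speed_kmh: float, reaction_time_s: float) -> float:
    """Metres travelled while the driver or system is still reacting."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * reaction_time_s

speed_kmh = 100.0  # assumed highway speed
human_m = distance_before_braking(speed_kmh, 1.0)        # ~1 s human reaction (assumed)
automated_m = distance_before_braking(speed_kmh, 0.005)  # ~5 ms system latency (assumed)

print(f"Human driver:     {human_m:.1f} m before the brakes are applied")
print(f"Automated system: {automated_m:.2f} m before the brakes are applied")
```

Under these assumptions the human driver covers roughly 28 metres before braking even starts, while the automated system covers well under a metre; that gap, multiplied over millions of journeys, is the core of the utilitarian argument.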

However, most people believe that specific moral considerations often trump the mere weighing of benefits against costs and risks. For example, in the eyes of many, the obligation to care for and not harm children takes precedence over all other concerns. People also place particular importance on the freedom and autonomy of the individual, or on ensuring that equality and justice for the needy are not sacrificed on the altar of maximizing what is good for the majority. Furthermore, the availability of AVs is likely to increase the number of private cars on the road and therefore have environmental impacts (more cars mean higher net emissions and more raw materials to build them), even as combating climate change is a priority for some.

Programming ethical decisions

Therefore, the moral challenges self-driving vehicles pose go far beyond the much-publicized, updated iterations of the classic "trolley" problem, in which the car must decide whether to take an action that kills the driver rather than a number of children or pedestrians. And yet such unusual situations, put forward by Gary Marcus and Patrick Lin, underscore the fact that AVs will confront difficult choices, may kill different people than a human driver would, and pose serious questions as to how they should be programmed. Would you, for example, buy a car that was programmed to drive off a cliff rather than injure a number of bystanders?

The importance of such unusual situations is that they illustrate that driving is not a bounded moral context. In a bounded context, it is sufficient to program the AV to follow straightforward traffic rules: stop at a stop sign, or look for a child and prepare to brake if a ball appears on or near the road. Driving, however, poses many open-ended situations that require understanding social customs and adapting behavior. For example, drivers must accurately interpret subtle gestures, including nods and winks, when they encounter a police officer directing traffic. Or consider how a self-driving vehicle should handle a four-way stop shared with human drivers. Usually a complex social ritual ensues in which drivers look at each other, nudge their vehicles forward, and engage in other behavior to determine who enters the intersection first. Such behavior is very difficult to program into a car, particularly when some drivers might even be inclined to trick the AV into an accident.
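To make the contrast concrete, here is a minimal sketch of what "bounded" rule-following could look like in code. The situation names and responses are hypothetical simplifications invented for illustration; the point is that the four-way-stop negotiation described above has no comparably clean rule:

```python
# Minimal sketch of bounded rule-following: every recognised situation maps
# onto a predefined response. All situation names and responses here are
# hypothetical simplifications for illustration only.

BOUNDED_RULES = {
    "stop_sign_ahead": "come_to_full_stop",
    "red_light": "come_to_full_stop",
    "ball_near_road": "slow_down_and_prepare_to_brake",
}

def bounded_response(situation: str) -> str:
    """Return the scripted response for a recognised, bounded situation."""
    return BOUNDED_RULES.get(situation, "no_rule_defined")

print(bounded_response("stop_sign_ahead"))            # come_to_full_stop
# The social negotiation at a four-way stop does not reduce to a lookup:
print(bounded_response("four_way_stop_negotiation"))  # no_rule_defined
```

The open-ended cases the article describes, such as eye contact, nudging forward, and local custom, are exactly what falls outside such a lookup table.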

How will self-driving cars handle intersections? (Photo: pawel.gaul / iStock)

Uncharted moral territory

In my opinion, machines making explicit real-time decisions about the life and death of humans is a form of evil. It is evil because computers lack true discrimination and cannot be held responsible for their actions. Whether self-driving cars are actually making a decision, however, or are merely acting upon a decision made by an agent (either an individual or a corporation) who is morally responsible and potentially culpable for harm, is a more difficult problem. That problem is presently being debated by the United Nations in a different context: whether to ban lethal autonomous weapons.

Whether a self-driving vehicle should sacrifice its driver and passengers to protect others is a new, if not entirely unprecedented, challenge. Furthermore, what to do will vary from situation to situation, depending, for example, on the number and ages of those in the automobile or about to be hit by it, and on whether the computer even has this information. Drivers and AVs alike never have all the information they need, may hold inaccurate information, and cannot determine all the consequences of the various actions they might take. This is why we have the languages of ethics to help navigate our uncertainties. In the case of AVs there will be a need for a societal conversation about what the vehicle should do when confronted with unusual situations, and those conversations should continue until new norms emerge that have the support of a consensus of citizens.

Relinquishing driving privileges

AVs will force adjustments in the expectations of human drivers, in driving habits, and in laws. But let us imagine that large numbers of driverless vehicles have been deployed on highways and city streets, and that the accidents they are involved in, and the fatalities they cause, are significantly below those caused by human drivers. There will be proposals for additional technologies, such as communication standards that let AVs increase safety or reduce traffic congestion by coordinating their activities. More importantly, safety-conscious citizens will demand that humans give up the privilege of driving. The debate over whether to implement this final stage in the deployment of AVs is likely to be much more disruptive than the introduction of fully autonomous cars itself. That debate will pit people with different values, or who prioritize values differently, against each other. The proposal that drivers give up the privilege of driving in the interest of the majority will create a full-scale societal and ethical conflict.

About our expert:

As well as chairing the Technology and Ethics Study Group at the Yale Interdisciplinary Center for Bioethics, Wendell Wallach is senior advisor to The Hastings Center - an independent, nonpartisan, and nonprofit bioethics research institute. His recent book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.

What do you think? Do you agree that a machine making critical decisions is, in principle, a form of evil? And how would you tackle some of the moral questions raised here? Let us know your thoughts in the comments.

Comments (4)

autonomousdriver
at 22.01.2016 17:42:28

Autonomous cars leading to MORE vehicles / emissions?

I don't get this conclusion. It's generally assumed they will spur further car-sharing programs and so should lead to FEWER cars. Wendell Wallach also leaves out the aspect that autonomous cars will reduce CO2 emissions by reducing parking-search traffic, by driving more efficiently and, mainly, simply because they will be going slower than manual vehicles.
autonomousdriver
at 08.03.2016 22:06:09

two solutions to the ethical dilemma of autonomous vehicles

Last week at tech.AD in Berlin, Germany, we had a very interesting round-table discussion on the ethics of autonomous cars, chaired by Baro Hyun from Hyundai Motor Group.

In a heated debate we came to two very interesting and, in my opinion, viable solution proposals for cases where, for whatever reason, an automated vehicle faces a decision between two options that both lead to a fatal accident.

Group #1 voted strongly for the "no decision" option. That is to say: if the vehicle "knows" already that a fatality will occur in any case, it should not take a decision to actively change its path but should follow its pre-planned path. The evaluation of X fatalities on one path versus Y fatalities on the other is morally extremely questionable. What if 3 people run onto the street, causing the vehicle to veer and kill one completely innocent person because it's "only" one fatality instead of three? This would be impossible to justify to the bereaved family.
Plus, it's likely what a human driver would do, not being able to analyze all the different options and risks anyway.

Group #2 strongly supported the "randomizer" option, which would equip the vehicle to take a randomized decision when two options of similar risk are detected. This would take away the responsibility of exactly calculating the risks and evaluating the value of lives.

However, in any case we need to remember that it's not the vehicle that "takes a decision" but the human being programming the algorithm.
2025AD_Team
at 14.03.2016 16:54:34

A moral crossroads?

AD may in fact raise some critical ethical concerns. Open forums like the one in Berlin, where auto professionals and other pundits discuss these issues, will hopefully help provide answers. Yet, besides the "no decision" and the "random choice" options, we'll need to think about a third aspect that may have to be factored in: who will buy a driverless car if he/she knows it won't protect the lives of its occupants at all costs? Could guarding a car's occupants from harm become the overarching principle governing a vehicle's decisions?
autonomousdriver
at 20.03.2016 21:16:58

Reply to A moral crossroads?

Autonomous cars protecting the driver's life

You're right, I actually left out this point which, in fact, was part of our discussion: the vehicle would have to be programmed to protect the life of the driver in any case, because any other solution, as mathematically and ethically correct as it might be, would make the car unsellable in practice. Thanks for the reminder.
