
Automated driving: Are we approaching a moral crossroads?

Steering towards some difficult ethical questions? (Photo: Fotolia)


A moral imperative? An ethical dilemma? The introduction of self-driving cars will bring more than just technological challenges. A contribution by Wendell Wallach.

Wendell Wallach chairs the Technology and Ethics Study Group at Yale University. In his guest contribution he raises some ethical questions relating to autonomous driving to which we, as a society, owe at least some thought as we move forward. 

If, as their developers contend, self-driving cars radically reduce traffic accidents and fatalities, then their adoption is morally acceptable and truly beneficial. However, autonomous vehicles also pose innumerable ethical challenges and will have societal impacts that diminish, though certainly do not offset, those benefits. The greatest of these challenges will arise if and when self-driving cars prove to be successful.

A moral obligation?

According to research performed by the U.S. National Highway Traffic Safety Administration from 2005-2007, human error is a factor in 93% of automobile accidents. Inattention, distraction, and fatigue commonly cause or exacerbate those errors. This alone suggests that the single-minded attention of a car's computer on the road will dramatically reduce accidents. Furthermore, the time it takes a human driver to recognize and react to a dangerous situation can be anywhere from a quarter of a second to much longer. An automated car with sufficient sensors and well-designed software can hit the brakes in a matter of milliseconds (thousandths of a second), presuming the car recognizes the dangerous situation as well as an attentive human would. By a simple utilitarian calculation, this means the benefits of autonomous vehicles (AVs) far outweigh any costs, and that there is a moral obligation to ease the way for their speedy adoption.
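To put those reaction times in perspective, here is a back-of-the-envelope sketch in Python. The 100 km/h speed and the specific delays are illustrative assumptions, not figures from the article:

```python
# Illustrative only: how far a vehicle travels during the reaction delay,
# before braking even begins. Speeds and delays are assumed values.

def reaction_distance(speed_kmh: float, delay_s: float) -> float:
    """Distance (m) covered at constant speed during the reaction delay."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * delay_s

for label, delay in [("attentive human (0.25 s)", 0.25),
                     ("distracted human (1.5 s)", 1.5),
                     ("automated system (0.01 s)", 0.01)]:
    d = reaction_distance(100.0, delay)  # assumed speed: 100 km/h
    print(f"{label}: {d:.1f} m before the brakes engage")
```

At 100 km/h, even a quarter-second human delay consumes roughly seven meters of road before braking begins, while a millisecond-scale response consumes only centimeters.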

However, most people believe that specific moral considerations often trump merely weighing benefits against costs and risks. For example, the obligation to care for and not harm children takes precedence, in the eyes of many, over all other concerns. People also place particular importance on individual freedom and autonomy, or on ensuring that equality and justice for the needy are not sacrificed on the altar of maximizing what is good for the majority. Furthermore, the availability of AVs is likely to increase the number of private cars on the road and therefore have environmental impacts (more cars mean higher net emissions and more raw materials consumed in manufacturing), while combating climate change is a priority for some.

Programming ethical decisions

The moral challenges self-driving vehicles pose therefore go far beyond the much-publicized, updated iterations of the classic "trolley" problem, in which the car must decide whether to take an action that kills the driver rather than a number of children or pedestrians. And yet such unusual situations, put forward by Gary Marcus and Patrick Lin, underscore the fact that AVs will confront difficult choices, may kill different people than a human driver would, and pose serious questions as to how they should be programmed. Would you, for example, buy a car that was programmed to drive off a cliff rather than injure a number of pedestrians?

The importance of such unusual situations is that they illustrate that driving is not a bounded moral context. In a bounded context it would be sufficient to program the AV to follow straightforward traffic rules, such as stopping at a stop sign, or watching for a child and preparing to brake whenever a ball appears on or near the road. Driving, however, poses many open-ended situations in which understanding social customs and adaptive behavior is required. Drivers must, for example, accurately interpret subtle gestures, including nods and winks, when they encounter a police officer directing traffic. Or consider how a self-driving vehicle should handle a four-way stop shared with human drivers. Usually a complex social ritual ensues in which drivers look at each other, nudge their vehicles forward, and engage in other behavior to determine who enters the intersection first, as the sketch below illustrates. Such behavior is very difficult to program into a car, particularly when some drivers might even be inclined to trick the AV into an accident.
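To make the bounded/open-ended distinction concrete, here is a deliberately naive sketch, hypothetical Python rather than anything resembling a real AV software stack. The bounded cases reduce to simple condition-action rules, while the contested four-way stop falls through to a case no such rule can settle:

```python
# A deliberately naive, hypothetical sketch: bounded traffic rules are easy
# to encode as condition-action pairs; open-ended social negotiation is not.

def bounded_policy(percepts: dict) -> str:
    """Condition-action rules for the bounded cases described above."""
    if percepts.get("stop_sign"):
        return "stop"
    if percepts.get("ball_near_road"):
        return "slow_and_prepare_to_brake"  # a child may follow the ball
    if percepts.get("four_way_stop_contested"):
        # No rule resolves who goes first: right-of-way is settled by eye
        # contact, nudging forward, and other social signals that have no
        # clean symbolic encoding -- the open-ended case in the text.
        return "UNRESOLVED: requires social negotiation"
    return "proceed"

print(bounded_policy({"stop_sign": True}))                # stop
print(bounded_policy({"four_way_stop_contested": True}))  # UNRESOLVED: ...
```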

How will self-driving cars handle intersections? (Photo: pawel.gaul / iStock)

Uncharted moral territory

In my opinion, machines making explicit real-time decisions about the life and death of humans is a form of evil. It is evil because computers lack true discrimination and cannot be held responsible for their actions. Whether self-driving cars are actually making a decision, however, or are merely delegated to act upon a decision made by an agent (individual or corporate) who is morally responsible and potentially culpable for harm, is a more difficult problem. That problem is presently being debated by the United Nations in a different context: whether to ban lethal autonomous weapons.

Whether a self-driving vehicle should sacrifice its driver and passengers to protect others is a new, if not entirely unprecedented, challenge. Furthermore, what to do will vary from situation to situation depending, for example, on the number and ages of those in the automobile or about to be hit by it, and on whether the computer even has this information. Drivers and AVs alike may lack the information they need, may hold inaccurate information, and cannot determine all the consequences of the various actions they might take. That is why we have the languages of ethics to help navigate our uncertainties. In the case of AVs there will be a need for a societal conversation about what a vehicle should do when confronted with unusual situations, and that conversation should continue until new norms that have the support of a consensus of citizens emerge.

Relinquishing driving privileges

AVs will force adjustments in the expectations of human drivers, in driving habits, and in laws. But let us imagine that large numbers of driverless vehicles have been deployed on highways and city streets, and that the accidents they are involved in, and the fatalities they cause, fall significantly below those of human drivers. There will be proposals for additional technologies, such as communication standards that let AVs increase safety and lower traffic congestion by coordinating their activities. More importantly, safety-conscious citizens will demand that humans give up the privilege of driving. The debate over whether to fully implement this final stage in the deployment of AVs is likely to be much more disruptive than the introduction of fully autonomous cars itself. That debate will pit people with different values, or who prioritize values differently, against each other. The proposal that drivers give up the privilege of driving in the interest of the majority will create a full-scale societal and ethical conflict.

As well as chairing the Technology and Ethics Study Group at the Yale Interdisciplinary Center for Bioethics, Wendell Wallach is senior advisor to The Hastings Center - an independent, nonpartisan, and nonprofit bioethics research institute. His recent book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.

What do you think? Do you agree that a machine making critical decisions is, in principle, a form of evil? And how would you tackle some of the moral questions raised here? Let us know your thoughts in the comments.
