Making automation a safe bet

Driving safely with your eyes off the road: the promise of AD. (Photo: Volvo)

Cars as safe as houses – that’s what engineers working on AD strive for, knowing that one fatal driverless accident might ruin it all. To make automation truly safe, they must remove some serious roadblocks: ensure road safety, guarantee data protection and resolve tricky ethical issues.

Safety must be paramount. That goes without saying if autonomous vehicles are to become a common fixture on our roads. One spectacular accident can shatter public confidence in any new technology. Look at Zeppelin airships – the Hindenburg disaster of 1937 quickly deflated their claim to be the future of air travel.

However, concerns about safety must not stifle progress altogether. Cars are made to be driven. Risk is therefore inevitable. The question is: how can we minimize it to a level that is acceptable to all?

The Hindenburg: one fatal accident can be the end of an innovation.

A long way to Vision Zero

Certainly, the bar will have to be set far higher for self-driving cars than for conventional vehicles. Current statistics show that more than a million people are killed in traffic accidents around the world each year. Autonomous cars will be expected to do much better than that.

Trains and auto-piloted planes perhaps offer a realistic safety standard to aspire to as a first step. Airline and train crashes may garner headlines worldwide, but the fact is they are very rare – primarily because these modes of transport offer less room for human error than cars.

Where things get tricky, however, is that autonomous vehicles will operate in a much more complex environment than both trains and planes. Relying primarily on sensors and highly accurate map data to get around, they will nevertheless routinely face all kinds of unexpected and unpredictable hazards: erratic driving by other motorists; speeding ambulances; temporary traffic signals; heavy snow. Unexpected obstacles on the road – animals, trash, a child’s soccer ball – present perhaps the biggest challenge.

The challenge grows more complex as speeds increase. Advanced Driver Assistance Systems (ADAS) like Traffic Jam Assist can already handle stop-and-go traffic quite well. Within a few years, they will have to cope with highway speeds.

How do engineers react to all these challenges? By creating innovative solutions. By developing high-tech cameras that can recognize moving objects more accurately. By establishing communication between vehicles so that cars further up ahead can warn those behind of upcoming dangers. And by perfecting emergency maneuvers that automated cars will perform in unexpected situations.
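
To make the idea of car-to-car warnings a little more concrete, here is a deliberately simplified sketch in Python. The message fields, road names and distances are invented purely for illustration; real vehicle-to-vehicle systems rely on standardized message sets and far richer data:

    from dataclasses import dataclass

    # Hypothetical illustration only - not any manufacturer's actual V2V protocol.

    @dataclass
    class HazardWarning:
        road_id: str          # road segment the hazard was reported on
        position_m: float     # distance along that segment, in meters
        hazard_type: str      # e.g. "obstacle", "black_ice", "sudden_braking"

    def should_slow_down(own_position_m: float, own_road_id: str,
                         warnings: list[HazardWarning],
                         lookahead_m: float = 300.0) -> bool:
        """Return True if any received warning lies ahead within the lookahead range."""
        for w in warnings:
            if w.road_id == own_road_id and 0.0 < (w.position_m - own_position_m) <= lookahead_m:
                return True
        return False

    # Example: a car 150 m behind a reported obstacle decides to slow down.
    warnings = [HazardWarning(road_id="A9-north", position_m=1250.0, hazard_type="obstacle")]
    print(should_slow_down(1100.0, "A9-north", warnings))  # True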

However, a single solution for each safety issue won't be good enough. Even the best components might fail. That is why autonomous cars will need redundant technologies: if one system fails, another is ready to step in. Better safe than sorry.
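
One way to picture such redundancy is a simple fallback: if the primary perception channel fails, an independent backup takes over. The sketch below uses invented function names and is only an illustration of the principle, not how any production system is actually built:

    # Minimal sketch of redundancy: query a primary perception channel and
    # fall back to an independent backup if the primary fails. Real systems
    # add monitoring, cross-checking and degraded-mode strategies on top.

    def detect_obstacles_camera():
        """Primary channel (e.g. camera-based). May raise if the sensor fails."""
        raise RuntimeError("camera feed lost")  # simulate a failure

    def detect_obstacles_radar():
        """Independent backup channel (e.g. radar-based)."""
        return ["vehicle ahead at 40 m"]

    def detect_obstacles():
        """Use the primary sensor; if it fails, the backup steps in."""
        try:
            return detect_obstacles_camera()
        except Exception:
            return detect_obstacles_radar()

    print(detect_obstacles())  # ['vehicle ahead at 40 m'] despite the camera failure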

Strengthening data protection

Safety concerns of another kind arise from the vast amounts of data that driverless cars will generate as they send and receive the information needed to transport and entertain their passengers. Where there is data, there is potential for abuse – and this must be addressed before AD can be deemed “safe”.

No chance for big brother: cars must be safe from spying. (Photo: Fotolia)

Obviously there is the immense technical challenge of making automated cars safe from cyber attacks. But we don’t even have to conjure up a criminal scenario to recognize the challenges at hand. We just need to realize that much of the data will be personal. And personal data is of course like gold dust to insurers, advertisers, investigating authorities and others who stand to gain from a better understanding of people’s behavior.

A supermarket might be keen to analyze customer traffic, for example. More worryingly, a government with a questionable human rights record might be interested to learn about a political opponent’s movements and contacts.

Data protection is therefore vital. Yet all sorts of questions about where it can be stored, who can have access to it, and for how long it can be held still need to be resolved.

Data protection is key. (Photo: Fotolia)

In 2014, the German auto industry compiled a set of data protection principles. These include transparency, the right to control what happens with one’s data, and data security. It’s a good start.

Next, data protection laws, which do not currently cover driverless car technology, must be strengthened. Potential users need to know they will be protected. Also, manufacturers need clarity to avoid confusion, potential fines or even product recalls further down the road. 

Ethics: between a rock and a hard place

Picture the terrible scene. You’re driving along a busy road in your autonomous car, hands off the wheel, when a small child darts out in front of you. It is a rare but conceivable scenario (even though autonomous driving in city traffic is not expected before 2035) – and if AD is to be safe, the decisions placed in the car’s hands must be ethical. Should it be “programmed” to prioritize your safety or someone else’s? You are traveling too fast to stop. A terrible accident is unavoidable. Your car must make a split-second decision, and it has three choices. What does it do?

1) Hit the child
2) Swerve right onto the sidewalk and strike an elderly couple
3) Swerve left into the path of an oncoming truck, putting yourself in harm’s way

The driver of a conventional vehicle might be forgiven for whichever unpalatable choice he makes in the panic of the moment. But things will be different for your self-driving car. Its life-and-death decision may not in fact be made at the scene but might have been made by computer programmers several months or years earlier. That throws up all sorts of ethical problems. Assuming your driverless car is programmed first and foremost to protect its occupant, perhaps the lesser of the two remaining evils would be to strike the couple and save the child. But would the elderly pair’s family agree? And if not, would they sue?
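
To see why the timing of that decision matters, consider a deliberately simplified sketch: a “policy” written down in code long before any incident, which simply picks the maneuver with the lowest weighted risk. Every option, number and weight below is invented purely for illustration – it is emphatically not a recommendation of how such a choice should ever be made:

    # Toy illustration only: the "ethics" live in the weights, which are fixed
    # by programmers months or years before the split-second maneuver.
    OPTIONS = {
        "continue_straight": {"risk_to_child": 1.0, "risk_to_bystanders": 0.0, "risk_to_occupants": 0.1},
        "swerve_right":      {"risk_to_child": 0.0, "risk_to_bystanders": 0.9, "risk_to_occupants": 0.1},
        "swerve_left":       {"risk_to_child": 0.0, "risk_to_bystanders": 0.0, "risk_to_occupants": 0.8},
    }

    # Occupant-protective weighting (hypothetical); changing these numbers
    # changes who bears the risk - which is exactly why the debate matters.
    WEIGHTS = {"risk_to_child": 1.0, "risk_to_bystanders": 1.0, "risk_to_occupants": 2.0}

    def choose_maneuver(options, weights):
        """Pick the option with the lowest weighted risk score."""
        return min(options, key=lambda name: sum(weights[k] * v for k, v in options[name].items()))

    print(choose_maneuver(OPTIONS, WEIGHTS))  # "swerve_right" under this particular weighting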

They may argue that your vehicle should have been programmed to swerve into the truck instead, putting your own life at risk, but what if your young family was in the car with you? Also, if driverless cars were to be routinely programmed to sacrifice their users to save others, how would people feel about buying them?

The list of questions goes on: traffic regulations might forbid the car from driving onto the sidewalk under any circumstances. But should they, even in situations where a sidewalk maneuver might save lives?

Scenarios like the one described here are of course improbable. Moreover, engineers often argue that future vehicles will anticipate such situations and avoid getting into a moral dilemma in the first place. A car could be programmed to slow down preemptively on roads with poor visibility. It’s also conceivable that by 2035, children will wear transmitters that send a timely warning to approaching vehicles.

However, it is important that these complex ethical questions are explored and debated now – and that this is done in a spirit of transparency and honesty.

Customers and investors need to know what to expect. Automakers must explain their reasoning behind algorithms that could take or save lives. 

Looking at the dilemma presented here: which decision do you think the car should make, and why? Take part and discuss in the comments!
