
What the Berlin #Breitscheidplatz attack means for automated driving

A view at the Kaiser Wilhelm Memorial Church at Breitscheidplatz, Berlin. (Photo: Fotolia / Frank)

Stephan Giesler

Automatic braking may have prevented further loss of life in the recent Berlin market attack.

Emerging reports from German media strongly suggest that an Advanced Emergency Braking (AEB) system fitted in the truck that careered into a bustling Berlin Christmas market may have prevented further loss of life.

The terrorist attack on December 19 claimed 12 lives and left many more injured when the Scania R 450 was driven off the road, plowing into market stands and, tragically, crowds of people. The truck came to a halt after approximately 70-80 meters (roughly 230-260 feet), following what was described as an ‘erratic’ course.

The latest indications point to the truck’s AEB as the means by which that halt was initiated. Indeed, since 2012, a European Union mandate has required that any new truck exceeding 3,500 kilograms (3.5 tons) be fitted with an AEB system, which detects obstacles via radar and cameras and warns the driver to take action. If the driver does not react, the brakes can act autonomously and may stop the vehicle entirely. The exact braking distance depends on the speed and mass of the truck; in the Berlin case, the truck was loaded with steel beams, bringing its total weight close to the maximum of 40 tons (see here for a description of how Scania’s AEB system works).
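To make the warn-then-brake logic described above more concrete, here is a deliberately simplified sketch in Python. It is not Scania’s implementation; the names, thresholds and time-to-collision heuristic are illustrative assumptions only.

# Highly simplified, hypothetical sketch of one AEB decision cycle.
# All names, thresholds and the time-to-collision heuristic are assumptions
# for illustration, not the actual logic of Scania's (or any vendor's) system.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float          # distance to the detected object in meters
    closing_speed_mps: float   # relative speed towards the object in m/s

def time_to_collision_s(obstacle: Obstacle) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if obstacle.closing_speed_mps <= 0:
        return float("inf")    # not closing in, no collision course
    return obstacle.distance_m / obstacle.closing_speed_mps

def aeb_step(obstacle: Obstacle, driver_reacted: bool) -> str:
    """One decision cycle: warn first, brake autonomously if the driver does not react."""
    ttc = time_to_collision_s(obstacle)
    if ttc > 4.0:
        return "monitor"                      # no imminent danger
    if ttc > 2.0:
        return "warn_driver"                  # audible/visual warning to the driver
    if driver_reacted:
        return "support_driver_braking"       # amplify the driver's own braking
    return "full_autonomous_braking"          # no reaction: system brakes on its own

# Example: a stopped obstacle 30 m ahead while closing at 20 m/s (72 km/h),
# with no driver reaction, triggers autonomous braking.
print(aeb_step(Obstacle(distance_m=30.0, closing_speed_mps=20.0), driver_reacted=False))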

Reports of this finding have trodden carefully on what is an extremely sensitive issue – and rightly so. In an incident so drenched in sadness, it is difficult to take away any positives. But these recent revelations point to perhaps the only exception: the prevention of further loss of life. Furthermore, the incident once again brings automated driving into the public eye and, in doing so, poses some challenging questions about its future.

Acceptance of Automated Driving

The first is one of acceptance. Generally, consumer acceptance is one of the key challenges for the future of automated driving. The crash of a Tesla Model S running in “Autopilot” mode in May 2016 certainly didn’t help. Experts repeatedly pointed out that Tesla’s so-called Autopilot is not a fully automated system. In the public domain, however, it was generally perceived as “the first fatal accident of a self-driving car”.

A demonstration of Scania's AEB system. (Photo: Scania)

The Scania AEB – essentially an automated driving function – has now achieved the contrary: despite the terrorist intentions of the hijacker, it may have prevented many more people from being harmed.

Even though advanced driver assistance systems such as AEB are already saving lives on our streets every day, this was a dramatic example that will be cited in the future as a strong argument in favor of automated driving functions.

Preventing vehicles from becoming weapons

The second question concerns the vulnerability of any vehicle to be used as a weapon in the wrong hands. Everybody knew of the dangers of hijacking an airplane and putting all passengers at great risk. On 9/11, the terrible idea of leveraging an airplane’s weight, speed and the explosiveness of its fuel to kill even more people on the ground became a sad reality. Right afterwards, strong additional security measures were implemented to prevent terrorists from entering the cockpit and taking over the airplane.

Now, as terrorists have even published a manual explaining why and how trucks can be used as such a cruelly effective weapon, it’s time to think about strong security measures for trucks. Perhaps a sophisticated “lock the cockpit” solution lies in AD technology itself, e.g. by preventing unauthorized people from driving them (above a certain minimum speed?).

So here’s a question to the engineers and programmers among us: what’s your idea to secure the truck of the (near) future against this kind of abuse – through connectedness, data, biometrics, remote control, artificial intelligence, …?
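As one purely hypothetical answer, consider an authentication gate that caps the truck’s speed unless the driver has been verified – for instance via fleet credentials or biometrics – and the vehicle stays on its dispatched route. This is an illustrative sketch only; the function names, thresholds and geofencing idea are assumptions and do not reflect any existing product.

# Hypothetical "lock the cockpit" sketch: the drivetrain refuses more than crawl
# speed unless the driver is authenticated and the truck is on its dispatched route.
# All names and values are illustrative assumptions, not an existing system.

CRAWL_SPEED_LIMIT_KMH = 10.0   # assumed "minimum speed" an unauthorized driver could still use
EU_LIMITER_KMH = 90.0          # heavy trucks in the EU are typically speed-limited to about 90 km/h

def permitted_speed_kmh(driver_authenticated: bool, on_dispatched_route: bool) -> float:
    """Return the maximum speed the drivetrain should allow in this situation."""
    if not driver_authenticated:
        return CRAWL_SPEED_LIMIT_KMH   # unknown driver: crawl speed only, alert the fleet operator
    if not on_dispatched_route:
        return CRAWL_SPEED_LIMIT_KMH   # authenticated but off-route: restrict and ask for confirmation
    return EU_LIMITER_KMH              # normal operation up to the legal limiter

# Example: a hijacker without credentials would be held to crawl speed.
print(permitted_speed_kmh(driver_authenticated=False, on_dispatched_route=True))

Restricting rather than stopping the truck outright would avoid creating a new hazard, such as stranding a vehicle in a dangerous spot; whether that trade-off is the right one is exactly the kind of question posed above.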

As a side note, this reminds us that preventing the remote hacking of automated vehicles is, for the very same reasons, a high priority.

Machine vs. man: Switching off the off-button

Finally, current and future automated systems simply react to the environment and the data picked up by their sensors; they are not aware of context or intentions as such and are, so far, devoid of any inherent moral compass. That in itself raises an interesting ethical question: should machines be able to essentially override human will? In the Berlin case, where the driver’s bad intentions were overridden by the system, everyone would argue in favor of that. But what if the driver’s good intentions need to override the system? So far it seems to be universally accepted that, ultimately, the human is in control: “There will always be an off-button.”

But what now? We knew that humans make mistakes, and now we have been cruelly reminded that, yes, there are humans with murderous intentions. In both cases, the machine can better assess not the intentions but the consequences, and correct the driver’s decision. So we need a way to detect such a wrong decision and, in that case, shut off the off-button.
