Waymo’s self-driving car crash: One in 7 million
Hello, automated driving community! Waymo gets hit, Lyft is on the lookout and driverless cars are virtually everywhere: we bring you this week’s key stories from the world of automated driving!
For about two months, Uber – Waymo’s ride-hailing rival – had been the bogeyman of the business: denounced for testing technology in public too early, scrutinized for neglecting safety. Now, it’s Waymo that finds itself at the center of unwanted attention.
On Friday, one of the firm’s self-driving Chrysler minivans was involved in a collision in Chandler, Arizona. A Honda had run a red light – and then swerved to avoid hitting another vehicle, ending up striking Waymo’s minivan. The van was traveling in AD mode at the time of the crash, and its safety driver suffered minor injuries. While police cited the Honda driver for a traffic violation, Waymo was cleared of any wrongdoing: “The Waymo vehicle was at the wrong place at the wrong time,” Seth Tyler, a Chandler Police Department spokesperson, told WIRED.
With the increasing deployment of driverless vehicles, the statistical likelihood of them getting hit rises – an inconvenient truth we’ll have to come to terms with. Still, the overall number of crashes should drop because of self-driving cars – at least that’s the promise the technology and the industry will be held to.
At the same time, society will develop a more realistic sense of proportion: the U.S. reports about 7 million crashes per year, most of them involving manually driven cars and caused by human error. That translates to roughly 19,000 collisions a day (without public uproar or scrutiny) – and Waymo’s van was involved in just one of them.
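The per-day figure follows from simple division (the 7-million-per-year number is the one cited above; everything else is arithmetic):

```python
# Rough sanity check of the crash-rate figures cited above.
crashes_per_year = 7_000_000  # U.S. crashes per year, as reported
crashes_per_day = crashes_per_year / 365

print(f"{crashes_per_day:,.0f} crashes per day")  # prints: 19,178 crashes per day
```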
But there’s more to it: in Waymo’s case, what did the sensors register, and how did the software decide to react (or was it even able to)? Would a human driver have been able to avoid the collision? Could the software have anticipated the other driver’s erratic behavior? (I remember a 2015 presentation by Chris Urmson, then at Google, arguing that the software should be able to deal with totally weird situations – I think they showed a wheelchair driving in circles around a duck in the middle of the road. Cars should still know how to handle cases like this.)
Overall, letting computers take the wheel should chiefly reduce and mitigate accidents – even though it may still take many years to get crash rates anywhere near the “vision zero”. In any case, Friday’s crash hasn’t exactly put a severe dent in that promise.
Draft day for driverless autodidacts
If you’re an engineer, chances are you’ve followed the classic career path: from exams and graduation to entering a tech company and working your way up. This development will continue to offer a promising avenue into the industry – and yet, two firms are turning it upside down.
Ride-hailing company Lyft and webucation startup Udacity have formed a partnership to identify the brightest minds in Udacity’s Self-Driving Car Engineering Nanodegree Program – and hire them for Lyft’s Level 5 automated driving tech team.
The Lyft Perception Challenge asks candidates to develop perception algorithms capable of recognizing vehicles in simulated urban surroundings. The competition runs until June 1 – and the top 25 candidates will earn a job interview at Lyft. “We recognize that conventional recruiting strategies no longer suffice,” Lyft writes on its blog. “What is needed is a future-facing hiring model as transformative as the field they’re hiring for.”
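Challenges of this kind are typically scored by how well predicted vehicle regions overlap the simulator’s ground truth. As a hedged illustration (intersection-over-union is a standard perception metric, not something taken from Lyft’s actual challenge rules), the core of such a scorer fits in a few lines:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: the tighter of the two boxes on each side.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted vehicle box vs. a ground-truth box from the simulated scene:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143
```

A score near 1.0 means the candidate’s algorithm localized the vehicle almost exactly; a score near 0.0 means it missed.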
The two tech firms are rewriting the recruitment rulebook – and their disruptive initiative is remarkable in two ways. First off, Lyft and Udacity are smartly marketing AD as an up-and-coming, software-driven key industry for unorthodox go-getters looking for a lateral entry. Second, they’re tackling the industry’s gaping skills shortage: the AD business is short of specialists in 49 different occupations, a study by the Workforce Intelligence Network found. Amid the “war for talent”, it seems logical for Lyft to make an engineering career more accessible to self-starting, self-learning AD autodidacts. After all, what does not holding an MIT engineering degree matter if you’re capable of creating next-level algorithms?
It will be interesting to see how the self-taught Nanodegree graduates compete with formally trained engineers in the long run. If the newbies stand the test, Lyft’s recruitment model just might too.
Virtual testing: A true potential?
Speaking of accessibility: using 3D simulations to test self-driving cars may not be a new idea, but Parallel Domain may be the first firm to build a business model around it.
The company will generate virtual worlds for autonomous cars to test in. Based on real-world map data, algorithms and generative models, any simulated content can be programmed at the push of a button – from the number of lanes and cars to weather and terrain type. So, while the autonomous car doesn’t move physically, its A.I.-powered computer brain can roam the 3D environments to test and train traffic scenarios. “It’s essentially like making a big video game for a car to drive through,” founder Kevin McNamara shared with TechCrunch.
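The “push of a button” idea is essentially procedural generation: sample the scenario parameters, then render the world. A minimal sketch of that sampling step (the parameter names here are illustrative assumptions, not Parallel Domain’s actual API):

```python
import random

def generate_scenario(seed=None):
    """Illustrative procedural scenario config: lanes, traffic, weather, terrain."""
    rng = random.Random(seed)  # a seeded RNG makes the world reproducible
    return {
        "lanes": rng.randint(1, 5),
        "vehicles": rng.randint(0, 60),
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "terrain": rng.choice(["urban", "suburban", "highway", "rural"]),
    }

# Same seed -> same world, so a failure found in simulation can be replayed exactly.
print(generate_scenario(seed=42))
```

The reproducibility is the point: when the virtual car’s software makes a mistake, the exact scenario can be regenerated from its seed and rerun until the bug is fixed.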
In the wake of two fatal driverless car crashes, Parallel Domain’s virtual worlds offer a promising (and less risky) way to let AD tech mature. The idea behind it: making a mistake in a simulation not only goes without consequences – it still yields a great deal of lessons. Even more importantly, the approach could make safe testing more accessible: with real-world testing still on the expensive side, smaller firms in particular could draw on “starter packs” of simulated content before putting advanced technology on public roads at a later stage. The concept could help accelerate the development of safe automated cars – by push-starting the democratization of safety.
So long, drive safely (until cars are driverless),