Hi, I’m an autonomous vehicle… (in case you didn’t notice)
Hello, automated driving community! Drive.ai to deploy some very vibrant vans and the mist starts to clear on the Uber crash: we bring you this week’s key stories from the world of automated driving!
Mountain View, California-based startup Drive.ai has just announced that it will launch a pilot program to bring an on-demand self-driving car service to Frisco, Texas, beginning in July 2018. The project bears all the usual hallmarks of a pilot: for the first six months, just four vehicles will operate in a confined, geofenced area comprising retail, entertainment and office space; it will begin with fixed pickup and drop-off locations; a safety driver will be present; it will be free…and so on.
But one thing separates this from any I’ve seen before. According to the press release: “the vehicles will be painted a highly-visible orange, and they will feature four external screens that communicate the vehicles’ intended actions to pedestrians and other drivers on the roads.”
As you can see from the image, the rather boxy, bright orange Nissan NV200 vans stick out like a sore thumb. But that’s the idea, as Drive.ai co-founder and CEO Sameep Tandon told Quartz: “It’s intended to be visually distinct. If you think of a school bus, you know when you’re around a school bus, it’s a really bad idea to, say, harass it, or to do aggressive maneuvers around it,” he said.
As for the screens, it seems Drive.ai is making good on a promise it made some time ago when it highlighted the importance of good communication between vehicles and other road users – and I’m glad to see it. After all, human-machine interaction is not limited to occupant and machine. Just think about it: how many times have you safely crossed a road as a pedestrian by making eye contact with the human driver in an “I’m going to cross now” sort of way (and maybe getting a friendly wave in response)?
“We are taking that human out,” Bijit Halder, vice president of product at Drive.ai, told Popular Science. “But how do we substitute that same emotional connection and communication and comfort?” Via the screens, it seems. While the messages are not yet finalized, they are thought to include notifications like “passengers entering/exiting,” “pulling over” and “self-driving off”.
Once again, I find myself distilling this news down to one thing: transparency. People have to be made aware of when, where and how autonomous test vehicles are operating. With these vans, that shouldn’t be a problem. And anyway, right now, autonomous vehicles don’t need to be sleek or blend seamlessly into the crowd on public roads. They need to gain trust. And transparency is the road to trust.
Some (false) positive news for the industry?
In a nutshell, a recent report on the Uber crash says that Uber has determined that the car’s sensors did in fact detect the pedestrian, but the software decided it didn’t need to react straight away, i.e. the finger of blame is pointing to a software problem rather than a hardware one.
It’s all to do with how the software was tuned. As with all autonomous vehicle systems, Uber’s software is programmed to essentially ignore certain objects in its path at times when they don’t pose a problem for the vehicle. Such scenarios are termed “false positives” and one example might be when a plastic bag floats over a road. The recent report suggests that the tuning in this case “went too far”, resulting in the car not reacting fast enough. Add to that a safety driver who was not paying attention and you have a lethal mix.
But why set this dial to anything but zero? One word: comfort. If the car’s system reacts to absolutely everything without question, the result is a rather jerky ride of slammed brakes and hard stops, to the point where the vehicle would barely be usable. It’s obvious that this too poses a danger on the roads.
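To make the trade-off concrete, here is a deliberately simplified sketch of how a single “ignore” threshold can trade comfort against safety. This is not Uber’s actual software; all names, scores and values are hypothetical, and real systems weigh many signals, not one number.

```python
# Illustrative only: a toy obstacle filter with one tunable threshold.
# A low threshold brakes for nearly everything (safe but jerky);
# a high threshold ignores more objects (smooth but risky).

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    threat_score: float  # hypothetical: 0.0 = harmless, 1.0 = certain hazard

def should_brake(detection: Detection, ignore_threshold: float) -> bool:
    """Brake only when the threat score clears the tuned threshold."""
    return detection.threat_score >= ignore_threshold

bag = Detection("plastic bag", 0.05)
pedestrian = Detection("pedestrian", 0.90)

# Conservative tuning: the car brakes for both, bag included (a jerky ride).
assert should_brake(bag, 0.01) and should_brake(pedestrian, 0.01)

# Tuning that "went too far": the bag is ignored, but so is the pedestrian.
assert not should_brake(bag, 0.95) and not should_brake(pedestrian, 0.95)
```

The point of the sketch is that any nonzero threshold that filters out the plastic bag also defines a class of real objects it can wrongly filter out; tuning is about where that line falls.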
As Uber continues to actively cooperate with the National Transportation Safety Board in its comprehensive investigation, this finding (preliminary though it is) might just let others in the industry breathe again. On the one hand, some parties are vindicated: Nvidia can assume it wasn’t the GPU, Velodyne that it wasn’t the LiDAR and Mobileye that it wasn’t its technology. But on the other hand, this only stresses again the key role that software is playing in automated driving and puts a focus on the tuning used to deal with false positives.
It’s a software conundrum that needs to be solved: balancing safety and comfort. Only then can two of automated driving’s most important promises be met.
So long, drive safely (until cars are driverless),