3D vs. 2D: A whole new dimension?
3D imaging is often said to be a prerequisite for autonomous driving. And yet, Intel has recently acquired Mobileye, a specialist in 2D camera systems. Is this the technology of yesterday – or a crucial step towards keeping the huge amounts of data involved in autonomous driving manageable?
"Data is the new oil" – nowhere is this maxim of the 21st century more fitting than in Israel. Surrounded by large oil-producing nations, the small country has quietly developed into a high-tech location for data services. When it comes to the car industry, hardly anyone can avoid Mobileye. "We are in pole position to play a decisive role in autonomous driving," says Amnon Shashua confidently. The IT expert co-founded the Israeli start-up in 1999. Today, the camera systems specialist works with 27 car manufacturers worldwide.
One company that is no longer on the list of Mobileye’s preferred partners is Tesla. Following a fatal accident involving Tesla's Autopilot function, the two founder-led companies fell out. Mobileye had supplied the camera system for the Model S, which failed to detect a white truck crossing the path of the Tesla. This is perhaps no surprise, given that the EyeQ3 chip with its mono-camera behind the windscreen was not designed to do so.
Nvidia’s approach: drastically increasing computing power
For Elon Musk, the development of highly automated driving is not moving fast enough. Tesla fans are currently eagerly awaiting Autopilot 2.0, which is supposed to further improve the performance of many automated functions. It is based on a new high-performance platform from Nvidia about the size of a small tablet. The multi-thousand-euro PX2 computing platform has 40 times the processing power of previous systems and can process data from eight cameras alongside radar and ultrasound sensors. It represents one of two schools of thought when it comes to the technology behind automated driving: multiplying computing power to deal with the ever-increasing amount of sensor data.
It is with the help of this sensor set that Tesla boss Elon Musk wants to drive from Los Angeles to New York fully autonomously by the end of the year – regardless of the fact that current legislation in Europe and the USA only permits driver assistance systems (Level 2). Only recently, the German Parliament voted to approve highly automated driving in Germany – a decision made despite an unusually short consultation period and many unresolved questions surrounding the draft law. It means that vehicles will soon be able to take over the driving temporarily, yet completely (Level 3). Once road traffic regulations have been amended accordingly, drivers will only be required to monitor the system, similar to an airline pilot. This is the official “go” for a new race involving car manufacturers and tech companies. Almost all of them want to put robot cars on the roads this year.
By using the PX2 platform, Tesla can already meet many requirements for Level 3 vehicles. What the Californians are lacking is a well-engineered safety concept with fail-safe functions: "For Level 3, not to mention Levels 4 and 5, we need redundancy in the electrical steering, the power supply and, of course, the brakes," says Elmar Frickenstein, Head of Autonomous Driving at BMW. "We also need 26 different sensors – including camera, radar and LiDAR." For now, Tesla's Autopilot 2.0 will remain a Level 2 system, albeit a very comfortable version, which German manufacturers also hope to bring to the market with new functions next year.
Mobileye’s approach: keeping the data amount manageable
According to announcements from the companies themselves, Tesla, Uber and Lyft are leading when it comes to robot cars. Amnon Shashua considers such self-promotion too simplistic. Shashua represents the other school of thought on driverless car technology: collecting only as much data as necessary – to keep the amount of data manageable. "There are two different approaches: the first is simple to implement, but is the wrong approach. And the second is Mobileye's approach," said Shashua at the recent Bosch Technology World in Berlin. Shashua is against all those who want to realize autonomous driving by simply using lavish amounts of data: "This is what Silicon Valley is doing: rapid prototyping. Using the 3D approach, new companies can generate a nice amount of data within six months, even though they only have ten engineers," the 56-year-old mathematics professor jokes.
Shashua leaves no doubt that such "show effects" with expensive laser scanners lead to a dead end. Even Google’s (Waymo) test cars only use a camera to detect traffic lights – everything else happens in 3D. The prerequisite is always an HD (High-Definition) map to locate the car. "Through this map you know where the lanes are without having to see them." However, such HD maps are very expensive and therefore cannot be scaled up to every road on the globe in the long term. 3D-mapped motorways are isolated solutions that can hardly justify the high price of the technology: "Autonomous driving will not take off if it is too expensive or has no real economic benefits."
2D reduces the calculation effort
Mobileye is therefore continuing to rely on relatively inexpensive camera systems as the basis for automated driving. "The camera is the only sensor that sees the road and all objects. It can recognize both outlines and textures on the road," Shashua explains. Furthermore, using triangulation, the system can determine the position of the vehicle between two precisely measured waypoints. All this can still be done on a relatively simple 2D level, to which radar and ultrasound data can also easily be added. At this level, you can therefore still get by with a manageable calculation effort.
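The waypoint triangulation described above can be sketched in a few lines. This is a minimal illustration, not Mobileye's actual algorithm: it assumes the camera supplies absolute bearings (in the map frame) from the vehicle to two precisely measured waypoints, and all function and variable names are invented for the example.

```python
import math

def triangulate(waypoint_a, waypoint_b, bearing_a, bearing_b):
    """Estimate the vehicle position from two precisely measured
    waypoints and the absolute bearings (radians, map frame) measured
    from the vehicle to each waypoint.

    The vehicle lies on the ray leaving each waypoint in the direction
    opposite to the measured bearing; intersecting the rays gives the
    position."""
    # Reverse each bearing to get the direction waypoint -> vehicle
    d1 = (math.cos(bearing_a + math.pi), math.sin(bearing_a + math.pi))
    d2 = (math.cos(bearing_b + math.pi), math.sin(bearing_b + math.pi))
    # Solve waypoint_a + t*d1 == waypoint_b + s*d2 via Cramer's rule
    rx = waypoint_b[0] - waypoint_a[0]
    ry = waypoint_b[1] - waypoint_a[1]
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    t = (rx * (-d2[1]) + d2[0] * ry) / det
    return (waypoint_a[0] + t * d1[0], waypoint_a[1] + t * d1[1])

# Example: vehicle actually at (40, 30), waypoints mapped at (0, 0)
# and (100, 0); recover the position from the two bearings alone.
a, b = (0.0, 0.0), (100.0, 0.0)
true_pos = (40.0, 30.0)
bearing_a = math.atan2(a[1] - true_pos[1], a[0] - true_pos[0])
bearing_b = math.atan2(b[1] - true_pos[1], b[0] - true_pos[0])
print(triangulate(a, b, bearing_a, bearing_b))  # close to (40.0, 30.0)
```

The point of the sketch is that the whole computation stays in 2D: two bearings and a small linear solve, with no point clouds or 3D geometry involved.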
Additionally, such data can then easily be projected onto HD maps without any transfer errors. This is good news for all car-makers who do not want to adopt a costly teraflop computing platform as standard in their cars.
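That projection step amounts to a simple 2D rigid transform: rotate each detection by the vehicle's heading and shift it by the vehicle's map position. The pose and detection formats below are assumptions made for illustration, not Mobileye's actual data model:

```python
import math

def to_map_frame(vehicle_pose, detection):
    """Project a detection from the vehicle frame onto map coordinates.

    vehicle_pose: (x, y, heading) of the car on the HD map, heading in
                  radians measured from the map's x-axis.
    detection:    (forward, left) offset of the detected object relative
                  to the car, e.g. fused from camera and radar.
    """
    x, y, heading = vehicle_pose
    fwd, left = detection
    # Rotate the offset by the vehicle heading, then translate
    mx = x + fwd * math.cos(heading) - left * math.sin(heading)
    my = y + fwd * math.sin(heading) + left * math.cos(heading)
    return (mx, my)

# A car at (10, 5) facing straight "up" the map sees an object 20 m ahead:
print(to_map_frame((10.0, 5.0, math.pi / 2), (20.0, 0.0)))  # ~(10.0, 25.0)
```

Because both frames are flat, the transfer is exact up to the accuracy of the pose estimate – there is no 3D re-projection in which errors could accumulate.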
In any case, car-makers have their hands full when it comes to integrating artificial intelligence into path planning. "Driving is a multi-agent game. But the other agents out there are not autonomous vehicles; they don’t necessarily cooperate. They are people, they are aggressive and they make mistakes," Shashua says. At the Berlin congress, he entertained the audience with a drone video of double lane changes at congested motorway entrances and exits. "Path planning needs a special form of artificial intelligence. Because every time I change my ‘driving policy’, it has an effect on the environment – and I have to collect all the data again."
Algorithms need to understand the context
Autonomous driving is therefore more than simply gathering large amounts of sensor data in an environment model – this is what existing assistance systems already do on a smaller scale. More essential is a superior understanding of situation and context, so as to be able to make decisions in rapidly changing circumstances. With its lean real-time algorithms, Mobileye has already demonstrated a convincing price-performance ratio – know-how that is clearly worth a lot of money to Intel. With each new level of automation, the system must be able to think further ahead and master increasingly complex situations alone. "It has to interpret what it sees – and predict where it can go. We call it increased awareness. This is where artificial intelligence can be found in urban traffic."
Shashua calls this level of difficulty a "killer" – one that still leaves many questions open. Until highly automated Level 3 driving ("hands-off") works in cities, extended and advanced Level 2 systems will increase comfort on the motorway. Perhaps this is what Elon Musk means by his "fully autonomous journey from Los Angeles to New York".
2D or 3D – which approach do you think will prevail in driverless cars? Share your thoughts in the comment section!