MADISON, Wis. – Almost a year ago Elon Musk famously proclaimed: “I really consider autonomous driving a solved problem.”
Given all the advances in artificial intelligence and a rash of announcements about business and technology firms partnering to develop robo-cars, the self-driving promise seems self-evident.
Tech companies and carmakers are sticking to self-imposed deadlines to roll out their first Level 4/Level 5 autonomous cars sometime between 2019 and 2021. Nobody is publicly backpedaling — at least not yet.
The business and investment community understands — and encourages — these business aspirations for autonomous vehicles.
Under the hood, though, the engineering community is staring at multiple problems for which they don’t yet have technological solutions.
At the recent Massachusetts Institute of Technology-hosted event called the “Brains, Minds and Machines Seminar” series, Amnon Shashua, co-founder and CTO of Mobileye, spoke bluntly: “When people are talking about autonomous cars being just around the corner, they don’t know what they are talking about.”
But Shashua is no pessimist. Speaking as a business executive, he said, “We are not waiting for scientific revolution, which could take 50 years. We are only waiting for technological revolution.”
Open questions
Given these parameters, which open questions still await a technological revolution before they can be answered?
Consumers have already seen pod cars scooting around Mountain View, Calif. An Uber car — in autonomous driving mode — recently collided with a left-turning SUV driven by a human in Arizona.
It’s time to separate the “science-project” (as Shashua calls it) robotic car — doing a YouTube demo on a quiet street — from the commercially viable autonomous vehicle that carmakers need but don’t have.
After EE Times listened to Mobileye’s CTO, as well as several scholars, numerous industry analysts and an entrepreneur working on “perception” in robo-cars, the list of “open issues” hobbling the autonomous vehicle industry only grew longer.
Some issues are closely related, but in broad strokes they fall into five bins:
1. Autonomous cars’ driving behavior (negotiating dense traffic)
2. More specific and deeper reinforcement learning, including edge cases
3. Testing and validation (can we verify the safety of AI-driven cars?)
4. Security and anti-tampering (preventing a driverless car from being hacked)
5. The more philosophical but important question of “how good is good enough” (because autonomous cars won’t be perfect)