How safe is safe enough?
LAS VEGAS — Honesty is typically not treated as the best policy in corporate presentations at the Consumer Electronics Show. But Gil Pratt, a former MIT professor who heads the year-old Toyota Research Institute, bucked that trend and leveled with his audience here Wednesday about the real future of autonomously driven robo-cars.
Citing the scientific and technical challenges, even with huge recent advances in artificial intelligence (AI), Pratt said, bluntly, "We are not even close."
Pratt hung this prediction on the issue of passenger safety in self-driven cars. Human beings, tolerant of human error, have come to accept the 35,000 traffic deaths every year in the United States, he noted. But could people accept even half that number of deaths caused by robotic automobiles, he asked.
"Emotionally, we don't think so," said Pratt. "People have zero tolerance for deaths caused by a machine."
Pratt supported his point by reviewing the five levels of autonomy established and recently revised by SAE International, ranging from driver assistance (Level 1) to full automation (Level 5). It's Level 5 that Pratt emphasized as arriving far into the future.
SAE defines Level 5 as “the full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.”
Acknowledging that every carmaker is shooting for Level 5, Pratt said, “None of us is close. Not even close.” He added, “It's going to take many years of machine learning and many, many more miles" of testing.
As for the hopes of tech companies and automakers to achieve the "high automation" of Level 4, Pratt threw a little more cold water, indicating that research at Toyota Research Institute puts that goal a decade away.
Pratt reminded the audience that SAE defines Level 4 as “the driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.”
Leading automakers are talking about the rollout of Level 4 cars in 2020. For the immediate future, though, Pratt sees Level 4 as "a fully automated car operating in a specially designed domain" with restrictions for speed, time of day and weather. “This matches up better with a car-as-a-service scenario,” Pratt concluded.
In other words, this isn’t the scenario of cars zipping hither and yon while passengers lounge in the car's interior watching movies, doing work or playing video games, as suggested by the more enthusiastic boosters of self-driven vehicles. Just such a utopia was presented later on Wednesday, to a cheering throng at the CES keynote session, by Jen Hsun Huang, CEO of Nvidia.
As for the intermediate levels of autonomous driving, Levels 2 (partial automation) and 3 (conditional automation), in which a "handoff" from the car to a driver is required when driving challenges appear on the road, Pratt was especially dubious.
Because of the driver's cognitive delay in reacting to an alarm issued by the car, a consequence of what researchers call "vigilance decrement," the handoff is an unreliable way to regain control before a possible crash. The delay has been compared to waking a person from a nap.
"The less frequent the handoff," said Pratt, "the more drivers overtrust" the car.
Take the example of a Level 3 car driving at 65 miles per hour, roughly 100 feet per second. With the reaction time of a disengaged human driver estimated at about 15 seconds, the response to danger won’t come until the car is roughly 1,500 feet farther down the road.
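The arithmetic behind Pratt's example can be sketched in a few lines; the exact conversion shows where the rounded figures come from:

```python
# Back-of-the-envelope distance a Level 3 car travels during a handoff,
# using the figures from Pratt's example (illustrative only).

MPH_TO_FPS = 5280 / 3600  # feet per mile divided by seconds per hour

speed_mph = 65
reaction_time_s = 15  # estimated reaction time of a disengaged driver

speed_fps = speed_mph * MPH_TO_FPS           # ~95.3 ft/s, roughly 100 ft/s
distance_ft = speed_fps * reaction_time_s    # distance covered before the driver responds

print(f"{speed_fps:.1f} ft/s -> {distance_ft:.0f} ft traveled")
```

At the exact conversion the car covers about 1,430 feet; the article's 1,500 feet follows from rounding the speed up to 100 feet per second.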
Pratt said that the Level 2 car is already here, but some critics see it as a bad idea, he noted. Under a Level 2 scenario in which “a driver must always monitor autonomy,” he said it’s difficult for the driver to maintain “situational awareness.” Pratt asked the audience at the press conference to stare at a big clock on a screen and clap when the clock’s second hand advanced two seconds instead of one.
The test showed that this isn’t hard for a short period of time. But when the same test continues for more than a minute, mistakes accumulate. People get bored. “It’s human nature.”
Drivers in Level 2/3 cars can not only overtrust driver assistance technology; they can also easily misuse advanced driver assistance system (ADAS) features by pushing them beyond their limits, Pratt explained.
"Some companies have already decided that the challenge may be too difficult, and they've decided to skip Levels 2 and 3," he added.
Despite Pratt's pessimism about the imminence of the self-driving future, he was enthusiastic about the advances in machine learning, including the work of the Toyota Research Institute, that he believes will make it possible.
Also upbeat was Bob Carter, Toyota's U.S. senior vice president for automotive, who was enthusiastic about "yu-i," Toyota's concept for a robotic car environment that presents itself to the car's passengers as "warm, engaging, friendly, immersive."
Noting that the human/car interface is "an emotional relationship,” he said Toyota's "view of the future should not start with just technology. It should start with the experience of the people who use it, with an interior designed to help the user" through "sight, sound, touch."
He added that the "yu-i" concept, a car that learns more the longer it drives around, is "kinetic. It's exciting. It's immersive. It's just fun."
But, said Carter, anticipating Pratt's caution, "We still have a long way to go."
During the presentation, Pratt said, "We know as wonderful as artificial intelligence is, artificial intelligence systems are inevitably flawed."
Toyota Research Institute is "taking a two-step approach," said Pratt. Toyota is "simultaneously developing a system called Guardian," which engages robotic aid when needed (like sudden braking), and "an artificial intelligence system called Chauffeur," which is fully engaged whenever the car's autonomous mode is turned on, he explained.
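The division of labor between the two systems can be sketched as a simple decision rule; the class and method names below are illustrative assumptions, not Toyota's actual software:

```python
# Minimal sketch of the two-mode split Pratt described: Chauffeur drives
# whenever autonomous mode is on; Guardian only intervenes when needed.
# All names here are hypothetical, chosen for illustration.

class Car:
    def __init__(self, autonomous_mode=False):
        self.autonomous_mode = autonomous_mode

    def control(self, human_input, hazard_detected):
        if self.autonomous_mode:
            # Chauffeur: fully engaged whenever autonomous mode is on
            return "chauffeur_drives"
        if hazard_detected:
            # Guardian: engages robotic aid only when needed,
            # e.g. sudden braking that overrides the human's input
            return "guardian_brakes"
        # Otherwise the human driver remains in full control
        return human_input

car = Car(autonomous_mode=False)
print(car.control("steady_throttle", hazard_detected=False))  # steady_throttle
print(car.control("steady_throttle", hazard_detected=True))   # guardian_brakes
```

The key design distinction, as described, is that Guardian leaves the human in charge by default, while Chauffeur takes over the whole driving task once engaged.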
Toyota's Concept-i vehicle comes with built-in artificial intelligence, nicknamed Yui, designed to learn from and grow with the driver.
Although Nvidia was much more optimistic about autonomous cars, Toyota’s Chauffeur-Guardian concept is similar to what Nvidia’s Huang called “two types of AI – AI for Auto-pilot and AI for Co-pilot” in his keynote.
Hopeful about the possibilities of these technologies, which apply the machine-learning elements of AI, Pratt said Guardian and Chauffeur can be "more than a helpful friend. They have the potential to be a friend who looks out for you and keeps you safe."
Following are SAE's J3016 levels and definitions:
- Level 0 – No Automation: The full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems
- Level 1 – Driver Assistance: The driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task
- Level 2 – Partial Automation: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task
- Level 3 – Conditional Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene
- Level 4 – High Automation: The driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene
- Level 5 – Full Automation: The full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver
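One way to see the structure of the taxonomy above is to encode it as a small enum; the names below are my own shorthand for the SAE labels, and the fallback rule reflects the definitions as listed:

```python
# Hypothetical encoding of the SAE J3016 levels listed above.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def requires_human_fallback(level: SAELevel) -> bool:
    # Through Level 3, the human driver is expected to perform the
    # remaining driving task or respond to a request to intervene;
    # at Levels 4 and 5 the system handles the dynamic driving task
    # even if the human does not respond appropriately.
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(requires_human_fallback(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(requires_human_fallback(SAELevel.HIGH_AUTOMATION))         # False
```

The boundary between Levels 3 and 4 is exactly the handoff problem Pratt flagged: below it, the system may demand that a possibly inattentive human take over.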