MADISON, Wis. -- Parsing out exactly what Google is accomplishing with its self-driving car project isn’t easy. The world waits as dribs and drabs of information trickle occasionally from Google’s blog.
The latest leak, earlier this week, was a blog post by Chris Urmson, director of Google’s self-driving car project. It offered a glimpse of how far Google’s self-driving car has come as it takes driving lessons on the streets of Mountain View, Calif.
The video clip posted on Urmson’s blog also gives a sense of what the self-driving car’s machine vision is actually seeing as it tools along.
But what exactly have we learned? More important, what challenges are still ahead for Google (and the automotive industry as a whole) to move the self-driving car from an R&D project to a real product? We talked to a few industry analysts.
What computer vision sees
One thing that Urmson’s post makes very clear is Google’s ambition. It hopes to take its autonomous cars through every street of every city, across every terrain. Clearly, Google is eager to debunk the conventional assumption that autonomous cars will most likely be deployed first on freeways.
Some experts in the industry have speculated that self-driving cars won’t be used for driving regular streets for a long time, since surface streets -- often plagued with unexpected events -- would be too tough for self-driving cars to handle, especially without vehicle-to-vehicle and/or vehicle-to-infrastructure help. The video clip shows otherwise.
Sure, Mountain View is no Mumbai. Street views in Mountain View are rather “antiseptic,” as described by Roger Lanctot, associate director, Strategy Analytics, when compared to any city in India or China, where pedestrian throngs mill constantly among all types of vehicles -- pushcarts, scooters, electric bicycles, you name it.
Still, what's most impressive about the video clip is how neatly Google Car’s computer vision organizes what it sees on streets into separate, independent boxes, Egil Juliussen, director of research for Infotainment & ADAS at IHS Automotive, tells us. He says, “Notice how neatly all the cars, bikes and pedestrians are marked on the map the computers see based on all the sensors?” In his view, the car’s computer eye sees objects on the street in a much more orderly fashion than probably “90% of drivers see while driving.”
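The "separate, independent boxes" Juliussen describes can be pictured as labeled objects with positions and extents relative to the car. The sketch below is purely illustrative: the class names, fields, and the simple "forward corridor" filter are assumptions for exposition, not Google's actual data structures or logic.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str        # e.g. "car", "bike", "pedestrian"
    x: float         # forward distance from the vehicle, meters
    y: float         # lateral offset, meters (left positive)
    width: float     # bounding-box extent, meters
    length: float

def objects_in_path(objects, lane_half_width=1.8, horizon=50.0):
    """Return objects whose boxes overlap the vehicle's forward corridor."""
    return [
        o for o in objects
        if abs(o.y) <= lane_half_width + o.width / 2 and 0 < o.x <= horizon
    ]

# A toy scene: three detected objects, as separate labeled boxes.
scene = [
    TrackedObject("car", 30.0, 0.2, 1.9, 4.5),
    TrackedObject("bike", 12.0, 2.5, 0.6, 1.8),
    TrackedObject("pedestrian", 8.0, -0.5, 0.5, 0.5),
]
print([o.kind for o in objects_in_path(scene)])  # → ['car', 'pedestrian']
```

The point of the representation is the orderliness Juliussen highlights: once every road user is a discrete, typed box, reasoning about which ones matter becomes a simple query rather than a perceptual judgment.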
Strategy Analytics’ Lanctot observes, “You see the Google Car is going rather slowly -- but very cautiously.” It’s learning the subtleties in its path as it moves along. One important thing to remember, he says, is that the Google Car is “a self-contained vehicle.”
In other words, the car doesn’t depend on data in the cloud to drive. The Google self-driving car is driving and “drawing a real-time map” on its own, by using the real-time data it has collected through its on-board sensors, including Velodyne’s lidar system on the rooftop, Lanctot explains.
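One common way to think about "drawing a real-time map" from on-board sensing is an occupancy grid: lidar returns are binned into cells around the vehicle. The sketch below is a minimal illustration under assumed inputs (a flat list of angle/range returns, a fixed grid size); a production system fuses many sensors and tracks object motion over time.

```python
import math

def occupancy_grid(points, cell_size=0.5, grid_dim=40):
    """Bin lidar returns into a grid_dim x grid_dim occupancy grid.

    points: iterable of (angle_rad, range_m) returns, measured from
    the vehicle at the grid center. Cells hit by a return are marked 1.
    """
    grid = [[0] * grid_dim for _ in range(grid_dim)]
    half = grid_dim // 2
    for angle, rng in points:
        # Convert the polar return to Cartesian coordinates.
        x = rng * math.cos(angle)
        y = rng * math.sin(angle)
        col = int(x / cell_size) + half
        row = int(y / cell_size) + half
        if 0 <= row < grid_dim and 0 <= col < grid_dim:
            grid[row][col] = 1
    return grid

# Two returns: one 5 m straight ahead, one 3 m to the side.
grid = occupancy_grid([(0.0, 5.0), (math.pi / 2, 3.0)])
print(sum(map(sum, grid)))  # → 2 occupied cells
```

The self-contained quality Lanctot emphasizes falls out of this picture: everything the map needs arrives through the sensors, so no cloud connection sits in the control loop.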