In America we had about 130 million cars driving (in 2007) roughly 30 miles per day each. If just 1% of them switched to self-driving cars, we would be talking about nearly 40 million self-driven miles per day. A single 100 km test cannot have encountered many of the situations that 40 million miles per day will see. It will certainly take many, many years and lots of improvement cycles before we can say good self-driving cars are ready.
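The back-of-the-envelope arithmetic works out like this (a sketch; the figures are the rough estimates from the comment, not measured data):

```python
# Rough check of the mileage estimate above (all inputs are the
# comment's estimates, not official statistics).
cars_total = 130_000_000       # US cars, ~2007
miles_per_car_per_day = 30     # average daily miles per car
adoption = 0.01                # assume 1% switch to self-driving

self_driven_miles_per_day = cars_total * adoption * miles_per_car_per_day
print(f"{self_driven_miles_per_day:,.0f} self-driven miles per day")
# prints: 39,000,000 self-driven miles per day -- roughly the 40 million quoted

# For comparison, a single 100 km test drive:
test_miles = 100 * 0.621371    # km to miles
print(f"A 100 km test covers about {test_miles:.0f} miles")
```

So one demonstration drive covers about six orders of magnitude less mileage than even a 1% fleet would log every day.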
The concept of the electric car was extremely simple: just add a battery and a motor to drive the car. It still took 15+ years of work (about 10 years from the first electric car on the road) to reach the point of limited success, and maybe a reasonable car like the Tesla.
@Bert22306 I do agree that driving is never risk free. However, there is a human element here that is smart enough to react to the situation. And yes, there are still incidents where accidents do happen. I am just pointing out that this adds risk, and like any other new technology, the benefits come with some additional risks. The point I was making is that any such failure can lead to mass failures: if some beacon goes out of working order, then a car loses its eyes, and that will happen to every car passing by. There will be many such new scenarios that we cannot think of now and that will need to be figured out once this system is deployed.
Also, even one bad person can lead to an accident, but there is always a smart mind behind the wheel that can adapt to the situation and react. In an automated world we will be under the control of machines, and they are not adaptive and smart enough.
I am not against the system, just a bit worried about the learnings from its initial phases and how robust it will be once we establish it.
I think you have it backwards - the V2V standards-setting activity has become a boil-the-ocean proposition. Commercial solutions exist today and are being blocked by regulators or by too-cautious commercial interests with too much at stake to take the necessary risks. Deploy DSRC technology NOW on commercial vehicles and at dangerous intersections. Leverage and encourage parallel wireless advances. Google is not boiling the ocean - just demonstrating what can be achieved when risks are taken. Let the market decide, not the scientists. Does anyone really trust the government to run this operation anyway? Does anyone WANT the government to run things? It almost guarantees failure and/or consumer rejection.
It is what it is. Ten years is too long to have valuable spectrum set aside with nothing to show for it. The current standards-setting regime has failed. Time to move in. The results speak for themselves. And, yes, I am deadly serious - 100 people killed per day. We have the technology and standards in hand today.
Please don't get me wrong, I am a supporter of automation for vehicles, I just want to get there in the right way. I'd rather see the efforts initially in driver assist where huge progress could be made very quickly. Leave the automated vehicle as a pure research program for now.
With relatively small incremental changes to the auto environment society could make the roads much safer (using V2I).
1. Monitor every vehicle all the time (vehicle recognition and registration, vehicle location, driver recognition, driver monitoring, speed, lights, signals, car spacing etc). The roads are public and we have no right to break any rules while using them.
2. Brake assist, parking assist, route assist (tell the driver which lane to be in) etc
3. Infrastructure-based speed control and vehicle start lock. (If the speed limit is 25 mph, why is the vehicle ever capable of exceeding this zone's speed limit? If the vehicle has the potential to be dangerous due to a fault, why allow it to start?)
With a small and incremental increase in infrastructure costs we could eliminate unregistered vehicles, unsafe vehicles, auto theft, speeding, DUI, and the potential for a huge number of minor infractions.
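The speed-control and start-lock idea in point 3 could be sketched like this (a hypothetical illustration; it assumes the roadside infrastructure broadcasts the zone's limit and that vehicles self-report fault codes - none of these names refer to a real V2I API):

```python
# Hypothetical sketch of infrastructure-enforced speed capping and a
# fault-based start lock. The function and fault-code names are
# illustrative assumptions, not part of any real standard.

def allowed_speed(requested_mph: float, zone_limit_mph: float) -> float:
    """Cap the driver's requested speed at the broadcast zone limit."""
    return min(requested_mph, zone_limit_mph)

def may_start(fault_codes: list) -> bool:
    """Refuse to start a vehicle reporting a safety-critical fault."""
    critical = {"BRAKE_FAILURE", "STEERING_FAULT", "TIRE_PRESSURE_CRITICAL"}
    return not critical.intersection(fault_codes)

print(allowed_speed(40, 25))          # driver asks for 40 mph in a 25 mph zone -> 25
print(may_start(["BRAKE_FAILURE"]))   # -> False
print(may_start([]))                  # -> True
```

The point is that the enforcement logic itself is trivial; the hard part is the infrastructure that delivers trustworthy zone limits and fault data to it.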
After a decade or so of reasonable penetration of the above, we'd almost be ready for the next step...more automation.
Some days ago the first Mercedes S-Class drove ~100 km autonomously on the streets from Mannheim to Pforzheim in Germany. The same way Bertha Benz and her sons drove with the Benz Patent-Motorwagen on 5 August 1888 in order to demonstrate the suitability of her husband's construction. So, I believe we are closer to autonomous driving than many people think.
Thank you, Junko, for a brave piece of journalism. It's tough to swim against the do-good DOT current of V2V myopia. The best thing the DOT could do at this point is encourage Google et al. in their private efforts to develop cars that do not collide with other cars!
#1 - The DOT has been spinning its wheels for 10 years, sitting on valuable spectrum without saving a single life or delivering a single monetizable application.
#2 - The key objective of V2V, of course, is not autonomous vehicles but collision avoidance - autonomy is a by-product highlighted by the introduction of the Google car.
#3 - Google's arrival on the scene has changed the game, simultaneously demonstrating the power of private enterprise to stimulate technological progress and market adoption AND the shortcomings of leaving such initiatives - especially technical specifications/definitions/requirements - to governments.
#4 - Trying to specify telecom-related standards divorced from market demand is a strategy with a long history of failure.
#5 - The government should require (note that I am not using the word "mandate") that class 6, 7, and 8 trucks, emergency vehicles, and service/construction vehicles be equipped with beacons to make their presence known to drivers and traffic controllers - yes, using DSRC technology.
#6 - All Bluetooth roadside implementations for measuring traffic flow should be equipped with DSRC as well.
#7 - In this way - and others - DSRC can find its way into a "demand driven" implementation scenario.
#8 - The DOT needs to facilitate, encourage and, fundamentally, get out of the way of progress - which includes keeping its hands and regulations off self-driving car development and progress.
The Google car has a pretty precise 3D map of the area it's driving through. It knows within a few centimeters where it is within the roadway. I don't know if lane markers are part of its map, but they easily could be; it's not much additional data. If that data is there, the car can stay in its lane whether the lanes are marked or not. The Google car might be able to handle this situation right now; if not, it would be a small, incremental improvement. So missing lane markers don't seem to make instructions from the roadway necessary.
As you mention, the Google car stores speed limits, but it can handle stop lights. It doesn't seem hard to me to teach it to read speed limit signs. That might seem like hand waving, but OCR has been around for a long time, and speed limit signs are simple. So another thing you present as a stopper seems to require just a small improvement.
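To illustrate why the sign-reading step is simple: assuming an OCR stage has already extracted the sign's text (the OCR itself is the assumed, harder part), pulling the numeric limit out of the standardized "SPEED LIMIT NN" layout is a one-liner. This is an illustrative sketch, not Google's actual method:

```python
import re

def parse_speed_limit(ocr_text):
    """Extract the numeric limit from OCR'd US speed-limit sign text,
    or return None if the text is not a speed-limit sign."""
    match = re.search(r"SPEED\s+LIMIT\s+(\d{1,3})", ocr_text.upper())
    return int(match.group(1)) if match else None

print(parse_speed_limit("Speed Limit 25"))   # -> 25
print(parse_speed_limit("STOP"))             # -> None
```

Everything difficult lives in the assumed OCR stage; the parsing that follows is mechanical because US sign layouts are fixed by convention.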
Same for the bridge problem. Making sure that the map information a self-driving car follows on its trip is updated quickly when a section of road becomes impassable is an incremental improvement.
I agree that the Google car is not a complete solution today. The drivers of Google cars do occasionally have to correct the car. The article you cite mentions two interventions over 140,000 miles. Each such correction is fuel for the improvement of the algorithm, and they're acquiring additional experience every day. Remember, the best research teams in the world couldn't get as far as eight miles in the desert without crashing, a mere nine years ago.
There are some things that I don't think the Google car can handle right now: some weather conditions, construction, a collision blocking the road, and an intersection controlled by a traffic cop. I say it's only a matter of time. Not much, though, given how far they've come in so short a time.
There is information that the infrastructure could provide that would be useful to a driver, whether it's a person or a computer. It could tell me how long before the light that I'm approaching turns green. It could tell me if there's a car stopped in my lane around that curve. Providing that information would prevent a few accidents and move traffic along somewhat faster. But self-driving cars can work with no changes to the infrastructure and would still be a vast improvement.
@Yoshida. The shift in focus in the article is understood, but V2V provides only a small subset of the communications required to support a mass of autonomous vehicles. As an incremental step it is worthwhile, but let's not kid ourselves that it leads to full autonomy.
Given that there are many years ahead of a hybridised environment (in both sensor data and vehicles), the chances of successful per-car autonomy are close to zero IMO. A great example is the amount of effort and computing power thrown at the DARPA challenges; do you really think that scales economically to a large population of vehicles?
Google have done a great job with a technology demonstrator, but the solution is a decade or more from prime time and is cost-prohibitive. Having the driver "on alert" just in case is dead in the water IMO. When it is eventually in a prime-time state.....
1. Will the same automation controller be used in all vehicles ($20k-150k)?
2. Will there be a mandated standard for the automation responses or will different manufacturers have their own special sauce?
3. What will be the autonomous rule set when an accident is unavoidable?
4. Does the autonomous system allow speeding, swerving etc? Is there even a manual mode?
5. What about the legal nightmares after an accident involving an autonomous system (or worse still, two)?
An infrastructure-based control system can more readily set speeds and spacing, and has a better chance of avoiding a cascading accident front when things do go wrong. Once the infrastructure sets the driving parameters there will be no speeding, running red lights, or other traffic infringements, so no need for fast cars etc. (Though I doubt that the cars will actually be appealing then.)
Islands of autonomy are in general a very bad idea when mixed in with human responders, and it won't be until automation is mandated for all that it can truly live up to expectations.