I don't agree, Prabhakar...in the long run, perhaps...but for now there is no entity, corporation, or government that could shell out the billions of dollars needed to build the infrastructure, so it needs to be done one driver at a time...Kris
In my opinion, the additional money a car owner will have to pay for such driver assistance features to be installed and maintained in a car will be much higher than the expenditure to install a common driver assistance infrastructure.
I also agree with the idea that the emphasis should be on putting whatever technology (vision/radar) into the road infrastructure rather than on every car. That will improve the reliability of the whole system, because the infrastructure can be managed far more efficiently than millions of individual cars. The problems of dirt or uneven painting of lanes can be circumvented by redundant sensors in the infrastructure.
And as @Caleb Craft is saying, the infrastructure can show a driver the "big picture."
Vision technology has advanced enough to reliably detect lanes and to identify objects in front. The challenge is whether the paint separating the lanes is consistently there.
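To make the lane-paint point concrete, here is a minimal sketch of the simplest possible lane-mark detector: thresholding one grayscale image row for bright painted stripes. This is an illustrative toy, not any production algorithm; real systems add edge detection, perspective handling, and temporal tracking. It also shows the failure mode the comment raises: faded paint drops below the threshold and the mark vanishes.

```python
# Toy lane-mark finder: scan one image row (grayscale pixel values)
# and report spans brighter than a paint threshold.
# Threshold and pixel values are made-up illustrative numbers.

def find_lane_marks(row, threshold=200):
    """Return (start, end) pixel spans whose brightness >= threshold."""
    spans, start = [], None
    for i, value in enumerate(row):
        if value >= threshold and start is None:
            start = i                      # entering a bright stripe
        elif value < threshold and start is not None:
            spans.append((start, i))       # leaving the stripe
            start = None
    if start is not None:
        spans.append((start, len(row)))
    return spans

# Synthetic scanline: dark asphalt (30) with two bright painted stripes.
row = [30] * 10 + [250] * 4 + [30] * 20 + [240] * 4 + [30] * 10
print(find_lane_marks(row))   # two stripes found: [(10, 14), (34, 38)]

# Faded paint (brightness 150) falls below the threshold and is missed,
# which is exactly the inconsistent-paint problem.
faded = [30] * 10 + [150] * 4 + [30] * 10
print(find_lane_marks(faded))  # []
```
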
I agree with the assessment. I do believe driver assistance should be a hybrid of both vision and radar, simply because measuring distance is not very reliable with a vision-based system, whereas radar does a good job of it.
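One standard way to combine the two sensors the comment describes is variance-weighted fusion: trust the radar's range more because its range noise is lower. Below is a minimal sketch of that idea (the static, one-dimensional case of a Kalman measurement update); the noise figures are illustrative assumptions, not specs of any real ADAS sensor.

```python
# Variance-weighted fusion of two noisy range measurements.
# Radar has small range variance; vision has larger range variance.
# The fused estimate lands close to the radar reading, and its
# variance is smaller than either sensor's alone.

def fuse(z_radar, var_radar, z_vision, var_vision):
    """Combine two measurements, weighting each by its inverse variance."""
    w_radar = 1.0 / var_radar
    w_vision = 1.0 / var_vision
    estimate = (w_radar * z_radar + w_vision * z_vision) / (w_radar + w_vision)
    variance = 1.0 / (w_radar + w_vision)
    return estimate, variance

# Illustrative numbers: radar reads 50.0 m (variance 0.25 m^2),
# vision reads 53.0 m (variance 4.0 m^2).
est, var = fuse(50.0, 0.25, 53.0, 4.0)
print(round(est, 2), round(var, 3))  # 50.18 0.235
```

Note that the fused variance (0.235) is below the radar's own (0.25): even a noisy vision range still adds information rather than just redundancy.
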
With ADAS, who is responsible for an accident if one happens? If ADAS only gives out warning signals, the liability is obvious. The challenge is when ADAS is actually doing part of the driving.
I know this is jumping way off topic, but I can imagine that ideally the sensors your car uses wouldn't necessarily be mounted ON the car. Once the infrastructure is updated, your car could pull from aerial cameras and other stationary devices that see the big picture. A pileup a mile in the distance could be registered and compensated for long before your bumper-mounted radar could do anything.
I know it is just a fantasy, but just imagine if we could implement control systems like the one that controls these quadcopters into our roadways!
@JeffBier, thanks for the URL! Thinking of a vision system as a "software-defined sensor" is an intriguing idea. As more intelligence gets integrated into the vision system, that does seem to be the trend...
In fact, I do understand that this is not an either/or question. And yet, in talking about this with several participants at the conference here, I realized that there are many different shades of radar and vision technologies.
Carmakers can choose to use vision sensors with more intelligence integrated while adopting a lighter version of a radar system. Or they can pay more for the heavier-duty radar system and add a much more straightforward image sensor (without too much intelligence). There seems to be a growing set of options for carmakers.
Another factor in favor of using vision in these applications is that a vision system can be thought of as a "software-defined sensor," which can be adapted to multiple purposes. For example, Mercedes is using a camera and embedded vision system to scan the road surface and adjust the car's suspension in real time for each bump in the road, resulting in a dramatic improvement in ride comfort. See http://bit.ly/LUvH42 for a review of this technology.
For those who want to learn how such systems are built, there are still a few seats available at the Embedded Vision Summit on October 2nd in the Boston area, where we'll have a full day of presentations and demos on embedded vision applications, algorithms, design techniques, and technology. See http://bit.ly/1d3xTrK for details.
NASA's Orion Flight Software Production Systems Manager Darrel G. Raines joins Planet Analog Editor Steve Taranovich and Embedded.com Editor Max Maxfield to talk about embedded flight software used in Orion Spacecraft, part of NASA's Mars mission. Live radio show and live chat. Get your questions ready.