Just as the industry expert quoted by the author found from his own experience, the algorithm seemed to work fine on the freeway but failed frequently off the freeway. No surprise there: freeway environments are more predictable, so the "human developed" software handles them better. Off the freeway, the environment becomes unpredictable and random. The task may simply be too large for any state-of-the-art processor to handle, and too complex for any "algorithm developer" to consider every case. Yes, we can condition the user not to rely on the "smarts" installed in the car when off the freeway, but that might be a "marketing no-no". We can also make incremental improvements over time as processors become more powerful and able to extract more intelligence in real time. If so, based on my years of observing technological advancement, the technology could end up as a niche play, and the "potential market" would prove illusory and never come to fruition.
I am posting this message to ask my fellow engineers a dumb question: are we barking up the wrong tree? Should we change the paradigm: instead of making the car smarter, make the road smarter? Let the car be a dumb receiver that follows instructions from processors monitoring road and traffic conditions, and leave the heavy lifting to those processors at fixed locations along the road. Of course, we would still install some "smarts" on the car for functions dealing with the more local environment, such as detecting a child behind the car while backing up or a car approaching nearby. What do you think?
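Purely to make the proposed split concrete, here is a minimal toy sketch in Python. Everything in it (the RoadsideUnit and VehicleReceiver names, the Advisory fields, and the simple speed rule) is an invented assumption for illustration, not anything from the article or any existing system.

```python
# Toy sketch of the "smart road, simple car" split suggested above.
# All names and the advisory rule are hypothetical, invented only to
# illustrate the division of responsibility between roadside processors
# and a mostly "dumb" vehicle receiver.

from dataclasses import dataclass


@dataclass
class Advisory:
    """Instruction broadcast by a fixed roadside processor."""
    segment_id: int
    target_speed_kph: float
    lane: int


class RoadsideUnit:
    """Heavy lifting happens here: a fixed site fuses road/traffic data."""
    def __init__(self, segment_id: int):
        self.segment_id = segment_id

    def compute_advisory(self, congestion_level: float) -> Advisory:
        # Simplistic rule: slow traffic down as congestion rises.
        speed = max(30.0, 110.0 - 80.0 * congestion_level)
        return Advisory(self.segment_id, target_speed_kph=speed, lane=1)


class VehicleReceiver:
    """The car mostly follows roadside instructions, but keeps a little
    local "smarts" for near-field hazards (child behind car, car alongside)."""
    def __init__(self):
        self.local_hazard = False  # set by on-board short-range sensors

    def act_on(self, advisory: Advisory) -> str:
        if self.local_hazard:
            return "STOP: local sensor override"
        return (f"follow segment {advisory.segment_id}: "
                f"{advisory.target_speed_kph:.0f} km/h, lane {advisory.lane}")


if __name__ == "__main__":
    rsu = RoadsideUnit(segment_id=42)
    car = VehicleReceiver()
    print(car.act_on(rsu.compute_advisory(congestion_level=0.6)))
    car.local_hazard = True  # e.g., a child detected behind the car
    print(car.act_on(rsu.compute_advisory(congestion_level=0.6)))
```

The point of the sketch is only the division of labor: the expensive perception and planning sit at fixed roadside locations, while the vehicle keeps just enough local sensing to override the broadcast instructions for immediate hazards.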
Vision processing will be a key part of any autonomous driving system in the future. I think TI may not be the only one in the race for such huge volume requirements; Invention and Freescale will also be working on the latest innovations to play a major role.
Hi Junko. I'm not sure what you mean by "all the automotive chips", but they have many automotive qualified devices. Mostly analog and mixed signal. I don't think their automotive market share is very large.
This may be a minor detail, but when I was chatting with Jeff Bier at BDTI, he mentioned that the RISC core inside EVE, the vision hardware accelerator integrated in this SoC, is not an ARM core but TI's own proprietary RISC core.
No doubt this is a hot field, and it's growing fast.
Until several years ago, a lot of machine vision work was done on sheer compute power. Purpose-built ADAS SoCs from companies like CogniVue, TI, Freescale, and ST will definitely change the landscape for automotive vision.
"Further, 93 percent of traffic accidents in the United States are estimated to be due to human error."
I would have thought that number would be closer to 100 percent. I wonder, for example, whether that statistic includes a less-than-ideal response to a sudden contingency, where a more expert, or perhaps automated, response could have avoided an accident: things like skidding on ice or sudden tire failure, where the blame is usually put on the mechanical problem rather than on the response.
Very timely article. And if these vision systems are going to be a major component of the V2I solution, scanning signs and so on, there's going to be even more demand on the algorithms. Pretty exciting stuff, I'd say.
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today may well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to stay within visual line of sight, as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.