Unfortunately (IMO) the SDC agenda is being driven by a lot of vested interest groups and individuals who stand to garner money, market position, influence, etc. by creating islands-of-automation solutions beyond Level 3/ADAS.
While many support V2V communications in their autonomous plans, the fact is that the communications latency and the size of the vehicle mesh quickly become unmanageable. Using V2V alone limits the control horizon, and V2I followed by I2V still has to resolve position and speed problems quickly, which is why IMO the logic should be centered in the infrastructure. Infrastructure-based control would allow broadcast messages from a central controller (at least central within a much larger horizon of vehicles). Some might raise the possibility of a loss of infrastructure control (some massive failure), but that then becomes a simple "orderly stop" response for all vehicles at an ADAS level.
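The scaling argument above can be sketched with a back-of-the-envelope message count (the function names and cycle model here are illustrative assumptions, not from the comment): a full V2V mesh requires pairwise state exchanges that grow quadratically with the number of vehicles, while a central infrastructure controller needs only one uplink per vehicle plus a single broadcast.

```python
# Rough sketch (hypothetical model): messages per control cycle for a
# fully-connected V2V mesh vs. an infrastructure (I2V) broadcast scheme.

def v2v_messages(n: int) -> int:
    """Pairwise state exchanges per cycle in a full V2V mesh: n*(n-1)."""
    return n * (n - 1)

def i2v_messages(n: int) -> int:
    """One uplink report per vehicle plus one central broadcast: n + 1."""
    return n + 1

# Compare the two schemes as the vehicle count grows.
for n in (10, 100, 1000):
    print(f"{n:5d} vehicles: V2V={v2v_messages(n):8d}  I2V={i2v_messages(n):5d}")
```

Under this (admittedly simplified) model, a mesh of 1,000 vehicles needs roughly a million exchanges per cycle, while the broadcast scheme needs about a thousand, which is the sense in which the mesh "quickly becomes unmanageable."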
In large part the current situation can be considered a failure of government bodies to take responsibility and provide the correct long-term solution strategies and direction. The lack of drive from the authorities will, I fear, make implementation of the correct strategies take decades longer than it otherwise might.
"When was the last time you had a confused PC?" I'd have to say, this morning.
Pretty much the rest of what you're saying, JC, I see as being right on target.
Right now, we spend huge amounts of money engineering the road systems based on the limitations of humans and current cars. It's perfectly logical to start engineering road systems for autonomous vehicles.
@Davide. I think you are being overly optimistic to consider the current research platforms such as Google Car to be approaching anything like AI. An AI would learn, adapt and change its responses based on experience. The current flock of attempts are nowhere near this level of power.
In a fully automatic (Level 4) vehicle by the current definition, the driver is not needed, so all the sensors to establish the state of the human are unnecessary. Consider that one of the projected hot applications of the automated car is taxi cabs; why would the human in a cab be expected to take control if the automation reached its limit? In a Level 3 vehicle you would potentially need to pass control back to the human, and here it really just becomes ADAS, with no potential for AI involved at all.
"Full autonomy still requires constant supervision." ...if this is true we are dead in the water for Level 4.
"There is no system that can yet match a human driver's ability to respond to the unexpected." ...so not true within the limits of the current automation. Watch Audi do Pikes Peak at the limit of the tires!
"The last thing we want to do is leave a confused car in control." ...we are talking computer control here, aren't we? When was the last time you had a confused PC? An AI or any advanced logic system either has a solution or it doesn't; it's an automaton. Next you'll have an angry car?
I hold grave doubts for the island of automation (individual automated cars) solutions to achieve our societal goals and would dearly like the logic to be in the infrastructure so we only need to deliver ADAS/Level3 automation in the vehicle. One set of logic governing all the (autonomous) cars (one ring to bind them all if you like a magic metaphor).
We can save immense amounts of investment by providing coverage in the infrastructure for our highest-occupancy freeways and roads. Even the Google car today is not able to cope with some of the merging and ramp complexities, and infrastructure sensors can be designed to have an excellent view of these situations. It makes no sense IMO to design automation around the severely limited horizon of an individual car's sensors when you can get a view of hundreds of cars from the freeway infrastructure.
Let's have a sensible discussion about computing power in automated vehicles and not make it science fantasy.
I have to agree with Frank Tu on this one. Very few consumers will be willing to pay for expensive electronics that permit their autonomous car to drive only on a small number of roads. And what roads are we talking about anyway -- new ones, or existing ones from which human-controlled cars will be banned? Either way, that's a tough sell politically & economically.
If autonomous cars are ever to reach the marketplace, it seems clear to me that they must share the same roads with error-prone human drivers -- and be designed accordingly to deal with those unpredictable humans.
@sanjib. Good question. However, as any ADAS technology promoters would tell us, this is about a new ADAS-equipped self-driving car being able to detect objects and pedestrians around the car. In other words, your non-self-driving cars will be, in theory, "watched" better by those self-driving cars.
@sheetal, I don't think you are alone in asking that question. And the automobile industry needs to come up with a better answer than... say, a generic answer like "safety." Because clearly, there are a lot of safety issues carmakers themselves are wading through right now for autonomous cars.
Are these cars going to co-exist with the non-intelligent cars (the cars of today)? I assume that won't be easy, will it? Because not all of the present cars would be capable of communicating with the automated cars. What is the plan for that?
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today might well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to maintain visual line of sight – as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.