I agree with kfield. The point of a "self-driving car" is to provide the comfort of a chauffeur-driven car. But if the user must stay alert to take over from the "auto-driver" in a crisis, then the entire purpose of providing "comfort" is defeated. I cannot imagine such a car being useful on the roads in India, where crisis situations could fill most of the journey time! And after all that, if the "auto-driver" system freaks out, it would be a nightmare. After all, not everyone can afford Batman's car. :)
What makes us humans believe that we can come up with a solution -- in just a 10-second warning period -- for problems the autonomous car couldn't figure out how to solve?
But this will be a nagging issue -- because it directly relates to the "whose fault is this?" question. Drivers and insurance companies will continue to argue, and I am pretty sure that carmakers don't want to be held responsible.
@CBDunkerson, thanks for your detailed response. Fascinating. You have obviously given this a lot of thought.
"How does the car differentiate between a traffic cop directing it to go (despite the red light) and some random joker doing the same thing? That could be handled by giving cops and construction workers special signalling devices for autonomous cars, but it is an issue that needs to be worked out."
Yes, that's something I have never heard other people talk about.
Given that the ultimate goal is to allow these cars to drive themselves without any human occupant at all, the question isn't really how does the car hand over control, but rather how does it handle 'exceptions'.
I'd argue that the vast majority of exceptions would be handled automatically by safety and route finding logic. For example, software directs the car to take a road, but sensors detect an obstruction blocking that road... safety logic would automatically prevent the car from hitting the obstruction and route guidance would then calculate an alternate route.
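The obstruction-and-reroute flow described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`plan_route`, `roads`), assuming the road network is a simple directed graph and using a standard breadth-first search; a real planner would be far more elaborate.

```python
from collections import deque

def plan_route(graph, start, goal, blocked=frozenset()):
    """Breadth-first search for a route, skipping blocked road segments."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt in visited or (node, nxt) in blocked:
                continue
            visited.add(nxt)
            queue.append(path + [nxt])
    return None  # no route left: fall back to "pull over and wait"

# Toy road network: intersections A..E
roads = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}

# Normal conditions: the planner takes the direct route.
plan_route(roads, "A", "E")                        # ['A', 'B', 'D', 'E']

# Sensors report segment B->D obstructed: safety logic marks it blocked,
# and route guidance automatically finds the alternate route.
plan_route(roads, "A", "E", blocked={("B", "D")})  # ['A', 'C', 'D', 'E']
```

The key point the comment makes is captured in the `blocked` parameter: the safety layer only has to flag the segment, and the same routing logic handles the exception with no human involvement.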
One area which needs further definition is 'illegal' activity. There are tons of motor vehicle laws on the books which humans violate all the time. For example, if you are on a one way road and there is an obstruction blocking the street (e.g. downed tree limb) a human will illegally back up to the last intersection and go a different way. Autonomous vehicles are going to need rules for when they are allowed to break the law... which will probably mean codifying the exceptions which humans have been using all along. However, this is more of a legal issue than a limitation of autonomous driving technology... they CAN handle these situations once we codify what they are allowed to do.
There isn't really a lot left once safety, route, and legal exceptions have been considered. One possibility might be human interaction... how does the car differentiate between a traffic cop directing it to go (despite the red light) and some random joker doing the same thing? That could be handled by giving cops and construction workers special signalling devices for autonomous cars, but it is an issue that needs to be worked out.
Personally, I suspect that there will be an 'operator' system for the transition from the 'driver assist' features currently on the road to 'fully autonomous' vehicles. Just as an operator used to have to manually connect long-distance telephone calls, you might have autonomous vehicles sending information to a human in a control center who then tells the car what to do... so the 'operator' would get a video of the cop directing traffic and tell the car when to go. Not driving the car, but just giving it simple instructions (e.g. 'go', 'back up', 'take Halsey Street') when it encounters an 'exception' it can't figure out on its own. Again, safety logic is going to keep the car from hitting anything, but it may pull over to the side of the road to wait for instructions if it can't figure out how to get to its destination (e.g. a rock slide has blocked the only road going there). Maybe those instructions come from the owner on a cell phone or maybe from a professional operator, but they'd only be needed to prevent autonomous cars from sitting motionless in some safe location... getting to a safe location and stopping would be the default behaviour for unknown conditions at all times.
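The "safe stop by default, resume on operator instruction" behaviour sketched above maps naturally onto a small state machine. All names here (`Mode`, `handle_exception`, the instruction strings) are hypothetical illustrations of the idea, not any real vehicle API.

```python
from enum import Enum

class Mode(Enum):
    DRIVING = "driving"
    SAFE_STOP = "safe_stop"  # pulled over, waiting for instructions
    EXECUTING = "executing"  # carrying out an operator instruction

# The operator never drives; only simple, whitelisted commands are accepted.
ALLOWED_INSTRUCTIONS = {"go", "back up", "take alternate street"}

def handle_exception(mode, can_resolve_locally, operator_instruction=None):
    """Default behaviour for unknown conditions: get to a safe
    location and stop; resume only on a recognised instruction."""
    if mode is Mode.DRIVING and not can_resolve_locally:
        return Mode.SAFE_STOP
    if mode is Mode.SAFE_STOP and operator_instruction in ALLOWED_INSTRUCTIONS:
        return Mode.EXECUTING
    return mode
```

The design choice worth noting is the whitelist: the remote human (owner or professional operator) can only unblock the car with coarse commands, while the onboard safety logic retains sole authority over collision avoidance at all times.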
I'm with you, DrQuine. I think the most difficult task will be the vision algorithms, but I don't think it's an impossible one.
Back when AI was being much ballyhooed, I asked during a meeting at work why anyone thought it was so different. My non-engineer boss said it's like not being able to tell whether you're interacting with a human or a machine.
I said to him, you don't understand. I don't think humans are that smart or unpredictable. Rule-based programming, with lots of nested if statements and a few coin tosses thrown in once in a while, would do if you really want to emulate humans.
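The comment's tongue-in-cheek recipe (nested ifs plus the occasional coin toss) might look like this. A deliberately simplistic, hypothetical sketch of the joke, not a serious driving model:

```python
import random

def emulate_human_driver(light, pedestrian_near, running_late):
    """Rule-based 'human' driver: nested if statements plus an
    occasional coin toss for the unpredictability."""
    if light == "red":
        if pedestrian_near:
            return "stop"
        if running_late and random.random() < 0.05:  # the coin toss
            return "creep forward"
        return "stop"
    if light == "yellow":
        # The classic human dilemma, decided by a fair coin.
        return "accelerate" if random.random() < 0.5 else "brake"
    return "go"
```

Which, of course, is exactly the behaviour autonomous cars are supposed to improve on.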
Planes also land on autopilot. We have Mars rovers that drive autonomously, avoiding obstacles or terrain that's too steep. We have drones that fly themselves and find their way back to base if they lose the remote-control signal, and so forth. None of this is outside the realm of the doable.
Plus, we have had all manner of computer programs that can beat even the most expert human players at complex games like chess.
What makes car driving such an insurmountable feat, given that so many barely proficient humans can more or less master that art?
In the slow-motion version of the problem (steady traffic flow), if the Google algorithms cannot make sense of the environmental information due to mud, snow, or sensor failure, the system has a difficult problem. I think the nearest analogy is what the human driver does when mud sprays onto the windshield, blinding the driver, and the windshield wipers cannot clear it: somehow the human needs to get engaged in the problem-solving process while the vehicle slows down and pulls to the side. In the crisis version of the problem (impending collision), the inability of the algorithm to solve the problem before the crash makes it unlikely that a human brought into the problem at the last instant can find a solution. By then we're on our way to an accident: the seat-belt tensioners are engaged and the air bags are about to blow.
Drones are, in essence, flying autonomous vehicles. Pros and cons surrounding drones today might well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required for visual line of sight – as are piloted airplanes? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.