Bert: Well said, my friend. I'm sure that someday it will be possible to have self-driving cars, although right now most cities can't seem to fill the potholes, much less build new roads. I wonder how long this transition will take?
I recall hearing serious talk of self-driving cars for the first time at the NY World's Fair in 1964 -- almost half a century ago -- at the GM exhibit there. They gave us the impression we'd all have them by the turn of the century. I'm guessing I may never see them in common operation.
How long do you think it will be before we see millions of self-driving cars in operation? And how long will it take to switch from them being mixed with human-piloted cars to the day they own the road themselves?
Sure. For one thing, a self-driving car will be infused with sensors, because it is imperative that the car know its own health at all times. I'm talking about these cars using the roadways efficiently, not just a half-a**ed approach to self-driving. So this car would sense a fire starting faster than a human would, and either put it out or take action to stop the car immediately.
But also, why assume that a human driver can stop a car faster than this algorithm-driven car? Humans do all sorts of irrational things when they panic. We already know that automatic controls do many, many jobs much faster and more accurately than humans can, so why wouldn't that also apply to emergency stops?
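The health-monitoring idea above can be sketched in a few lines. This is a hypothetical illustration, not a real vehicle API: the sensor names, thresholds, and response labels are all invented assumptions.

```python
# Hypothetical sketch: an on-board health monitor that reacts to a cabin
# fire faster than a panicked human could. Sensor names and thresholds
# are illustrative assumptions, not a real vehicle interface.

from dataclasses import dataclass

@dataclass
class SensorReadings:
    cabin_temp_c: float   # cabin air temperature, Celsius
    smoke_ppm: float      # smoke particle concentration
    speed_kph: float

def plan_response(r: SensorReadings) -> str:
    """Decide how the car reacts to its own health readings."""
    fire_suspected = r.cabin_temp_c > 60.0 or r.smoke_ppm > 300.0
    if not fire_suspected:
        return "continue"
    # Fire detected: suppress if the situation is mild, otherwise stop
    # as fast as surrounding traffic allows (coordinated, not a blind
    # panic stop).
    if r.smoke_ppm < 1000.0:
        return "activate_suppression"
    return "coordinated_emergency_stop"

print(plan_response(SensorReadings(22.0, 5.0, 100.0)))     # continue
print(plan_response(SensorReadings(80.0, 400.0, 100.0)))   # activate_suppression
print(plan_response(SensorReadings(95.0, 2500.0, 100.0)))  # coordinated_emergency_stop
```

The point of the sketch is that the reaction is decided in microseconds, every cycle, with no panic involved.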
Technology seems to evolve this way all the time. During a transitional phase, the comfortable, well-known alternative is kept as backup (like sails were retained on steam ships, and hand cranks were retained on cars with electric starters). Pretty soon, though, those old standbys have to go. You can't even use a hand crank when the compression ratio is higher than 5:1 or so. Imagine having to live with that restriction. So the old standby becomes a liability. That's what will eventually happen with human drivers.
Bert: I think the reason folks are worried about giving up control to the machines is because there's no assurance they will be better drivers than the humans -- and you bring up an interesting example with the thought about a human who needs to urgently exit the car.
Let's say there's a fire inside the passenger compartment -- we've had some horrid fatal incidents of that recently in the SF Bay Area. A human would draw on all mental capability to stop and get out of traffic very, very fast. But giving a command to a computer to stop the car might not bring the same sort of panic stop. I'm guessing the car would follow the normal protocol of signaling surrounding vehicles and reducing speed gradually. I suppose you could build a panic button into the dash, but that raises all sorts of other questions....
I'm a long-time science fiction fan, and the notion is an old one in the genre. The usual expression is that you get in the car, tell it where you want to go, and the car takes over from there. You sit back and watch a movie or something. In general, you aren't driving; the car is. You're simply a passenger.
Achieving that state will require not only sophistication in the car, but also a network infrastructure the car connects to. It will be the network's job to decide how to get to my destination, picking the best route and managing the traffic in the process. The process would be analogous to TCP/IP packet routing, where the exact path a packet will take to a destination is not specified, and may vary from packet to packet depending on conditions in the network.
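The routing analogy can be made concrete with a toy shortest-path computation. This is a sketch under invented assumptions -- the road graph, travel times, and congestion figures are all made up -- showing how the "best" route changes per trip as conditions change, the way an IP path can change per packet.

```python
# Sketch of the routing analogy: the network recomputes the best route
# from current travel times, just as IP routing adapts to network
# conditions. The road graph and times (minutes) are invented.

import heapq

def best_route(graph, start, goal):
    """Dijkstra's algorithm over current travel times."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, minutes in graph[node].items():
            nd = d + minutes
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

roads = {
    "home":    {"highway": 5, "surface": 8},
    "highway": {"office": 10},
    "surface": {"office": 12},
    "office":  {},
}
print(best_route(roads, "home", "office"))  # highway route, 15 minutes

roads["highway"]["office"] = 40             # congestion reported upstream
print(best_route(roads, "home", "office"))  # reroutes via surface streets
```

Scale that idea up to a cloud service holding live conditions for every road segment, and you have the network-managed routing described above.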
While manual control fallback would likely be an option, all concerned would want to make it as unlikely as possible that it would ever need to be used.
Years ago now, I had the interesting experience of driving a submarine control system emulator (designed for training crews). This was a new device at the time. A real eye opener.
Unless you've driven a sub manually, you may not realize just how tight the allowable depth tolerance is. If you're moving fast just a little too close to the surface, the sub is likely to broach (break the surface) unexpectedly. Conversely, with just a little too much down plane at high speed, you'd be surprised how fast you get to crush depth. It's not like these things can be turned around on a dime. It takes lots of training and skill to do this safely, and one would only change depth at low speeds.
Now turn on the automatic controls. No problem at all. You can go at flank speed, command max depth, and it'll go there with no apparent strain. Go fast at periscope depth, also no problem with broaching. It all seems so easy, yet it's close to impossible to do with humans at the controls.
Ditto with controls for those experimental high-performance forward-swept-wing jets. A human can't control these airplanes except with control system software assistance.
The handover mechanisms need to be carefully worked out. In the simpler cases, i.e. where the cars are driving alongside manually driven cars, it seems easy enough. Just like cruise control. The driver can take over just by grabbing the wheel. Since humans are driving out there, the margins have to be very wide. No problem.
Seems to me that in the more interesting applications, like on roadways dedicated to self-driving cars, you wouldn't want to allow a human to take over. These automatic control systems invariably bring the technology to where humans are incapable of safely operating the machinery. So in this case, if the human needs to get out urgently, you need to command this to the car. And the car would then need to coordinate that operation with the other traffic.
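The two handover regimes described above can be sketched as a simple decision rule. Everything here is hypothetical -- the mode names, action names, and responses are assumptions for illustration, not any real standard.

```python
# Sketch of two handover regimes (names are invented assumptions):
#  - mixed_traffic: margins are wide, so the human may take over
#    instantly, cruise-control style.
#  - dedicated_roadway: humans can't safely drive here, so an urgent
#    exit becomes a request the car coordinates with other traffic.

def handle_driver_input(mode, action):
    """Return what the car does when the human intervenes."""
    if mode == "mixed_traffic":
        # Instant disengagement is safe; just hand over.
        return "manual_control_granted"
    if mode == "dedicated_roadway":
        if action == "urgent_exit":
            # Negotiate a slot with surrounding traffic and pull over.
            return "coordinating_exit_with_traffic"
        return "request_denied"
    raise ValueError(f"unknown mode: {mode}")

print(handle_driver_input("mixed_traffic", "grab_wheel"))     # manual_control_granted
print(handle_driver_input("dedicated_roadway", "urgent_exit"))  # coordinating_exit_with_traffic
print(handle_driver_input("dedicated_roadway", "grab_wheel"))   # request_denied
```

The key design choice is that on the dedicated roadway the human's input is a request to the system, never a direct override.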
Not sure why so many seem to assume that human control is the ultimate safety feature. There are countless control systems out there where any type of human in the loop would be devastating. Maybe, at best, humans might intervene through special software control. It's not so inconceivable that driving will eventually reach that same place.
"Moreover, it was connected wirelessly to the Internet, giving it access to a vast cloud-based set of data that could be matched to what the local sensors were seeing."
It makes sense. It's very doubtful that local in-car sensors can be counted upon to know that there is road work at that intersection 5 miles down the road, and therefore you should take a detour. Or even that the lane next to you, which appears empty, is only empty because a mile or so down the road it is blocked off for some reason. These are things that people might find out from a radio report, the morning news, or some other such source.
Self-driving cars can't constantly be blind-sided by things that their on-board radar/lidar/what-have-you would have a tough time knowing or predicting.
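The fusion of local sensing with cloud knowledge might look something like this. It's a toy sketch with invented data shapes: the car's local view says a lane is clear, but a cloud advisory about a closure a mile ahead overrides it.

```python
# Sketch of fusing local sensor data with cloud advisories. The data
# shapes ("local_view" dict, advisory records) are invented assumptions,
# purely to illustrate the idea.

def choose_lane(local_view, cloud_advisories):
    """Pick a lane that is locally clear AND not flagged closed upstream."""
    closed = {a["lane"] for a in cloud_advisories if a["type"] == "closure"}
    for lane, clear in local_view.items():
        if clear and lane not in closed:
            return lane
    return None  # nothing usable; fall back to slowing down / rerouting

local = {"left": True, "right": True}          # both lanes look empty locally
cloud = [{"type": "closure", "lane": "left"}]  # left is blocked a mile ahead
print(choose_lane(local, cloud))               # right
```

The left lane "appears empty" exactly because it's closed downstream -- which the on-board sensors alone could never know.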
I suppose they can mingle with manually driven cars, although that takes away a whole lot of potential efficiency. Now you have to leave in big margins to accommodate erratic human behavior.
I think the "handover" issue is something we already have -- one way -- with Cruise Control. There, a tap of the brake or pressing a button on the steering wheel releases control to the driver. But we don't have the reverse, where the car suddenly leaves cruise control on its own, assuming the driver is ready to take over -- that would be dangerous.
So now consider a car that is controlling not only the speed but the steering as well. If it released control on its own, without the driver initiating the handover, chaos would ensue. But, according to this article, cars can't yet handle all sorts of driving situations on their own. Until they can, I think this technology must remain in the testing phase.
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today may well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to stay within visual line of sight, as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.