Yes, but David, human pilots apparently did not figure out that the Air France speed sensors were frozen over, even when the plane was at an altitude high enough to recover (over the equatorial Atlantic, on a flight from Rio to Paris a few years ago). Humans can become confused when visibility is bad, confused in ways that the right combination of sensors would not be.
Of course, the more one relies on automation, the more one has to intelligently design redundant sensors: for example, altimeters that use air pressure and radar, with perhaps even GPS as a final smell test (GPS altitude alone is not reliable). Done right, it's hard to believe that human pilots would be safer in many extreme situations.
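To make the redundancy idea concrete, here is a minimal toy sketch of fusing three altitude sources with a disagreement flag. The function name, tolerance, and readings are all invented for illustration; real avionics voting logic is far more involved.

```python
# Hypothetical sketch: fuse three altitude sources and flag disagreement.
# Names, values, and the tolerance are illustrative, not from any real system.

def fused_altitude(baro_ft, radar_ft, gps_ft, tolerance_ft=200):
    """Return the median of three altitude readings, plus a health flag.

    The median tolerates a single wildly wrong sensor (e.g. an iced-over
    pitot-static source); the flag trips when any reading strays too far
    from the median, so the system knows a sensor is suspect.
    """
    readings = sorted([baro_ft, radar_ft, gps_ft])
    median = readings[1]
    healthy = all(abs(r - median) <= tolerance_ft for r in readings)
    return median, healthy

# Example: barometric altimeter stuck low; radar and GPS agree.
alt, ok = fused_altitude(baro_ft=1200, radar_ft=3500, gps_ft=3450)
print(alt, ok)  # prints: 3450 False
```

The point of the median (rather than the mean) is that one failed sensor cannot drag the fused value with it, which is exactly the failure mode in the Air France case.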
In my definition of "fully autonomous," there would be no human input involved. Fully autonomous is where the human is reading a book. Anything else is assistance, not full automation.
So by this definition, the vehicle's speed would be set to whatever makes the most efficient use of the roadway, varying with road conditions and other traffic. For example, one consideration will be timing traffic through intersection points, to minimize the need for stopping.
When people decide to speed a little, figuring the cops will allow some slop, they do so with the often false notion that it will get them to their destination faster. A proper autonomous vehicle would instead set the speed with knowledge of what's ahead, and with much better certainty that the chosen speed will actually get you to the destination sooner.
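The "set the speed with knowledge of what's ahead" idea can be sketched with a toy example: given the distance to the next signal and its green window, pick the fastest legal speed that arrives while the light is green, instead of racing to a red and stopping. Everything here (the function, the timing model) is my own illustration, not any vendor's algorithm.

```python
# Toy model: choose a cruise speed that reaches the next signal
# during its green window, never exceeding the posted limit.

def target_speed_mph(dist_miles, green_start_s, green_end_s, max_mph):
    """Return the fastest legal speed (<= max_mph) that arrives at the
    signal while it is green, or None if no legal speed can make it."""
    for mph in range(max_mph, 0, -1):            # prefer the fastest legal speed
        eta_s = dist_miles * 3600.0 / mph        # travel time at this speed
        if green_start_s <= eta_s <= green_end_s:
            return mph
    return None

# A signal 0.5 mi ahead turns green in 40 s and stays green another 30 s:
print(target_speed_mph(0.5, 40, 70, 55))  # prints: 45
```

Note that the answer (45 mph) is below the 55 mph limit: driving faster would only mean idling at a red light, which is the commenter's point about speeding often being a false economy.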
@AZskibum - amen to that. Just a couple of days ago I watched an "Air Crash Investigations" episode (probably my favourite TV program) about a Boeing 737 that was flying an ILS approach with a known faulty radio altimeter. What the pilots did not know was that THAT radio altimeter was the one feeding the autothrottle. So just off the end of the runway, the autothrottle decided the plane was landing, cut power to idle, and the plane stalled a few hundred feet up. Most of the pax and crew survived, but it's a mistake that a human pilot alone would not have made, and they did not have time to correct before the crash. There were, as I remember, 2 radio altimeters, so why didn't the system get data from both of them? No excuse with the bountiful I/O of modern MCUs. And why wasn't it programmed to warn the pilots when it was reading -8 ft of altitude while it knew it was still airborne, well before the final approach? Lots of questions, and lots of things to think about when designing autonomous systems.
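The cross-check the poster wishes existed is easy to sketch: compare the two radio altimeters and reject physically impossible readings before any automation acts on them. All names, thresholds, and phases here are invented for illustration; real certified avionics logic is much more rigorous.

```python
# Hedged sketch: validate two radio altimeter readings before use.
# All names and thresholds are hypothetical, for illustration only.

def usable_radio_altitude(ra1_ft, ra2_ft, phase_of_flight,
                          max_disagreement_ft=50):
    """Return a trusted radio altitude, or None when the crew should be alerted."""
    def plausible(ra):
        # A reading well below 0 ft is physically impossible, and a
        # near-ground reading is impossible when the aircraft is known
        # not to be landing yet.
        if ra < -5:
            return False
        if phase_of_flight != "landing" and ra < 100:
            return False
        return True

    valid = [ra for ra in (ra1_ft, ra2_ft) if plausible(ra)]
    if len(valid) == 2 and abs(valid[0] - valid[1]) > max_disagreement_ft:
        return None            # both plausible but disagree: alert the crew
    return valid[0] if valid else None

# One altimeter has failed to -8 ft on approach; the other reads 1950 ft.
print(usable_radio_altitude(-8, 1950, phase_of_flight="approach"))  # prints: 1950
```

With a check like this, the -8 ft reading would have been discarded (and flagged) long before the final approach, instead of silently driving the power to idle.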
zchrish wrote "Right now, human drivers decide the speed limit. Often times that speed limit is different than the posted speed limit."
I assume you meant that human drivers often decide to exceed the posted speed limit, which is something each of us observes every day. This raises another interesting question -- will fully autonomous vehicles obey a driver's command to do 60 mph in a 55 mph zone? If not, this will be a point of strong resistance from consumers. We can all say "but people should really obey the speed limits," but the reality is, we prefer to have the option to speed, at least a little bit, when we decide it is safe to do so.
Just a taste of the problem. Right now, human drivers decide the speed limit. Often times that speed limit is different than the posted speed limit. There are many reasons for this, but software would need to decipher the reason in every situation, whereas a human can, in an instant, (mostly) understand it and react.
Do you really believe that? If humans were so adept, how come they can't even manage something as simple as keeping their eyes on the road, instead of texting while driving? Or, why does ABS improve braking performance, if humans were remotely competent in unusual circumstances (read, panic)?
The simple fact is that in doing most tasks, humans tend to be unpredictable. Bit by bit, automation takes over tasks that humans did previously, and almost every time will do that task better. Certainly true in industry. Humans need to concentrate more on creativity, and design machines that can do the repetitive tasks more safely and efficiently.
Well, maybe, but pedestrians, cyclists, and animals (both tame - think horses - and wild) use the roads too. So there will always be some non-computer-controlled road users (except, perhaps, on motorways).
No doubt. And how do humans detect these obstacles? Only with one pair of eyes, which look in only one direction, only in the visible spectrum, and with heavy degradation in rain, fog, and darkness.
Surely, we can do better these days?
And if someone gets injured there will be a desire to apportion blame.
True enough. Which is one reason why fully autonomous driving is probably still far off. But heavens, people, are we all terrified of systems such as ABS and stability control? These are in fact driver-assistance systems that keep fallible humans from doing stupid things. They already exist, and have for years now. Parking assistance, lane-keeping assistance, and the like are no more terrifying than ABS.
Junko - I'm thinking of the auto enthusiast underground. A lot of the modifications those folks make to their cars aren't street legal. They're sold and installed "for off road use only", but end up on the public roads anyway.
My guess is that self-driving mod kits will start to show up in a few years in the same places that now advertise the "for off road use only" equipment.
The behavior of pedestrians and cyclists will definitely be taken into consideration. One of the video clips posted by Google earlier this year showed a Google car cautiously proceeding on a surface road, carefully avoiding cyclists, etc. As someone on this forum commented then, that Google car was definitely driving like an old lady! That said, yes, I believe Google is definitely factoring in all those human behavioral elements.