I think the article nails the problem right on with this quote: "Drivers being asked to take more driver education courses and stay alert all the time behind the wheel inside a self-driving car, in a way, defeats the whole purpose of autonomous cars." My 90-year-old father recently underwent a driving evaluation (at his children's insistence!) and his skills and reaction time dropped significantly when he was talking versus concentrating solely on driving. I can only imagine how difficult it would be to pay strict enough attention when in an autonomous vehicle to be able to react quickly in an emergency.
I agree with Karen here: the need for additional user training kind of defeats the objective.
So logic dictates that if you are not relying on the user to handle exceptions, then the system must be able to. This gets us into use cases, risk analysis, and trying to mitigate exceptional events. As you know, there are tons of books and consultancy available on this.
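To make that triage concrete, here is a minimal sketch of the likelihood-times-severity risk-matrix reasoning those books and consultants formalize. All names, scales, and thresholds here are hypothetical illustrations, not any real safety standard:

```python
# Hypothetical risk-matrix triage: score each hazard by likelihood x severity,
# then decide whether the system itself must mitigate it (no user fallback).
def risk_score(likelihood, severity):
    """Both inputs on a 1-5 scale; higher means worse."""
    return likelihood * severity

def disposition(likelihood, severity, acceptable=4):
    score = risk_score(likelihood, severity)
    if score <= acceptable:
        return "accept"               # residual risk tolerable
    if likelihood > 1:
        return "mitigate-in-system"   # cannot rely on the user to catch it
    return "control-environment"      # rare but severe: barriers, segregation

# A rare but catastrophic event pushes toward controlling the environment
# rather than relying on driver reaction time.
print(disposition(1, 5))  # control-environment
print(disposition(3, 4))  # mitigate-in-system
```

The point of the third branch is exactly the one made below: when in-system mitigation can't bring the score down, you change the road, not the driver.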
But if you can't mitigate the system risks to a low enough level (indeed - what level!?!), then it's time to look at controlling the environment instead, e.g. barriers, segregation, automatic speed limiters, and all the other possible safety measures that some drivers see as an 'infringement of personal liberty'.
As I have been saying for years, as a practising engineer I would never get a safety case approved for a system that involves minimally trained users piloting multiple-ton vehicles along a stretch of tarmac with closing speeds of 130mph: "This new system proposal - you are telling us that there's NO barrier in the centre??! Are you kidding!!"
So we are stuck with a dangerous, boiled-frog system that some people abuse for thrill-seeking. Until there's a change in user attitude and a system re-design from ground-up, then the introduction of automation will remain an aspiration.
I don't think the two are parallel. All of the training they are mentioning is for commercial pilots who are carrying passengers. That would be analogous to a self-driving bus, not a car. The testing that a private pilot goes through, though more rigorous, is analogous to a driver's license.
I don't think we need a commercial pilot's strict training to be handed the reins of our own personal vehicles.
There is still the issue of slower response time because you aren't already focused on the task of driving, but much the same could be said of cruise control, to a lesser degree.
Several airline accidents have been caused by this situation. Their autopilots do their best, with the pilots too often not monitoring the situation on a continuous basis. When the plane's autopilot says suddenly, 'I'm done, it's all yours!' the result has been deadly.
Also, I think the problem for cars is MUCH worse than for airplanes. The required reaction times are so much shorter.
The article clearly reveals that the implementers (Google et al) have no clue how to build a large scale viable system at all.
Having millions of independent car systems (autonomous islands) built and tested by various manufacturers is a recipe for disaster, IMO. They should be looking more to the "Highway in the Sky" efforts, where a centralized (though distributed) system is responsible for object management.
If the infrastructure provides all the management (and route planning) then at least we have only one major system to debug, and any modifications and fixes are rolled out universally. Enablement would be incremental based on highway traffic flow and speeds, and the car side implementation becomes much simpler. On local side streets it may even be that simple "follow the buried cable" implementations would be viable to extend the coverage where speeds are low.
Cars would be driven manually until in range of the enabled road sections, when a driver could then enlist into the system. If the infrastructure prompts the driver to take manual control again and they cannot (asleep, drunk, ill, or simply inattentive), then the system moves the car off the highway and parks. This puts the onus on the government, I know, but I think that is where the responsibility for control should rest.
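That enlist/prompt/park handoff could be sketched roughly as a toy state machine. The states, timeout, and method names below are my invention for illustration, not any real system's design:

```python
# Toy sketch of the infrastructure-prompted handoff: if the driver does not
# confirm manual control within a timeout, the system pulls over and parks.
MANUAL, AUTO, PULLING_OVER, PARKED = "manual", "auto", "pulling_over", "parked"

class HandoffController:
    def __init__(self, timeout_s=10):
        self.state = MANUAL
        self.timeout_s = timeout_s
        self.prompt_age_s = None  # seconds since take-over prompt; None = no prompt

    def enlist(self):
        """Driver joins a managed road section."""
        if self.state == MANUAL:
            self.state = AUTO

    def prompt_takeover(self):
        """Infrastructure asks the driver to resume manual control."""
        if self.state == AUTO and self.prompt_age_s is None:
            self.prompt_age_s = 0

    def driver_confirms(self):
        if self.state == AUTO and self.prompt_age_s is not None:
            self.state = MANUAL
            self.prompt_age_s = None

    def tick(self, dt_s):
        """Advance time; an unanswered prompt escalates to a safe park."""
        if self.prompt_age_s is not None:
            self.prompt_age_s += dt_s
            if self.prompt_age_s >= self.timeout_s:
                self.state = PULLING_OVER
                self.prompt_age_s = None
        elif self.state == PULLING_OVER:
            self.state = PARKED  # next tick, after reaching the shoulder

c = HandoffController(timeout_s=10)
c.enlist()
c.prompt_takeover()
c.tick(12)   # driver asleep/drunk/ill: no confirmation within 10 s
c.tick(1)
print(c.state)  # parked
```

The key design choice is that the default on silence is a safe stop, never a forced handback to an unresponsive human.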
The risks become higher when you mix controlled and manual traffic, so it may be we have to segregate autonomous traffic until every vehicle is capable of being controlled automatically. If speeds and highway density are to go up, then it's certainly time for people to realize that use of the highways is not a personal liberty or right.
I think the first assumption made by carmakers is that autonomous cars will reach the point where the situations a self-driving car must hand back as "exceptions" will be limited.
For many engineers who routinely deal with "exceptions," this is a hard premise to accept.
The second assumption by the automotive industry, however, is a more interesting one. As more and more drivers get used to ADAS (advanced driver assistance systems) in their new cars, they will embrace a certain level of automation much more readily, and they won't be so averse to the potential risks of autonomous cars.
That is a scenario more likely to pan out, I think.
Some of the latest Porsche 911 versions no longer offer manual transmissions, as Porsche's dual-clutch (PDK) automatic transmission is said to perform the task (of shifting gears) much more efficiently and more than an order of magnitude quicker than a human. There is a lot to think about within that sentence alone, and not enough space to discuss it all here. To take the driver out of the 'driving' equation and turn him/her into just an occupant is a monumental task by itself. The major 'drivers' behind this effort are many-fold, including the fact that there are over 30,000 motor vehicle-related deaths per year (89/day, but on the decline) in the US alone (http://tinyurl.com/dcy8qb). Of course, there are other very important reasons to want human-transport automation, including the environmental effects of traffic jams/accidents and wasted time, as byproducts.
IMHO, there needs to be a separate infrastructure for such 'driverless' vehicles. Putting such topics to the side, I would like to draw a situation here to ponder: On a 2-lane bridge, there is a school bus full of children in an imminent head-on crash with a [very] intelligent vehicle. Realizing that a horrific/imminent accident is about to occur, this [very] intelligent vehicle decides to take its lone occupant out of the gene pool by driving itself off the bridge, in order to save the lives of many children. This is not such a far-fetched situation.
Now, let us extend this similar scenario a bit further and remove the bus from that same bridge and replace it with a 1968 Camaro SS [my fave]. I am venturing to guess that the driver behind wheel of that antique Chevy continues to drive (and live), while watching the intelligent vehicle (and occupant) do a swan dive off that same bridge. I want to be that Luddite in the Camaro!
1) drivers that grow up with 'assistive' technology are going to be much more accepting of fully autonomous technology.
2) current drivers are very poor at handling 'exceptions' when they are already clearly at the wheel.
3) technology is quickly advancing to where autonomous cars will experience far fewer 'exceptions' (three or four orders of magnitude) than humans do in the best of situations, and far fewer still compared to drivers who are a) drunk b) tired c) inexperienced d) aged e) distracted f) angry g) stressed ...... [go ahead and add to the list]
Fully autonomous vehicles using existing roadways, with no added infrastructure, are a rapidly approaching reality. Consumer acceptance will be an issue initially, but that will wash out over time, particularly once the insurance costs of fully autonomous vehicles are compared to insurance costs on vehicles with manual controls. You may still want to drive your car, but it is going to hit you severely in your wallet.
We are arguing this from the perspective that human-piloted cars are perfect -- they aren't. So the comparison can't be between Autonomous and not Autonomous. The real Risk Assessment has to be Autonomous vs. Human. I'm confident that an Autonomous car isn't going to break into the Ethanol and operate drunk. (Perhaps less confident that it won't become inattentive while texting its robot friends or corporate mother.)
And it's not just Autonomous failures vs. NO traditional car failures, either. I have experienced uncommanded acceleration in a 1970s-era car (a mechanical failure), and an uncommanded shift from Drive to 2nd at highway speeds in a first-generation electronic automatic transmission. Autonomous vehicles are not different in kind, just in scope. They will have to be designed to be Fault Tolerant and to Fail Safe; we actually already have decades of experience with those designs in automotive, built up as electronics and embedded SW were integrated into the designs.
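One common building block of those Fail Safe designs is a watchdog: the control loop must check in periodically, and a missed deadline forces the vehicle into a safe state rather than letting a fault propagate. The sketch below is an invented minimal illustration of the pattern, not an implementation of any automotive standard:

```python
import time

# Minimal fail-safe watchdog sketch: each healthy control-loop iteration
# "kicks" the watchdog; a missed deadline triggers the safe-state action
# (e.g. cut throttle, engage hazard stop) instead of continuing blind.
class Watchdog:
    def __init__(self, deadline_s, on_timeout):
        self.deadline_s = deadline_s
        self.on_timeout = on_timeout
        self.last_kick = time.monotonic()

    def kick(self):
        """Called by the supervised control loop every iteration."""
        self.last_kick = time.monotonic()

    def check(self):
        """Returns False and fires the safe-state action on a missed deadline."""
        if time.monotonic() - self.last_kick > self.deadline_s:
            self.on_timeout()
            return False
        return True

faults = []
wd = Watchdog(deadline_s=0.05, on_timeout=lambda: faults.append("enter-safe-state"))
wd.kick()
assert wd.check()    # healthy loop: deadline met
time.sleep(0.06)     # simulated hung controller
wd.check()
print(faults)        # ['enter-safe-state']
```

Real automotive implementations put this logic in independent hardware so a hung processor can't also hang its own supervisor, but the contract is the same: silence is treated as failure.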
Finally, with respect to the training: airplane pilots require from 40 to hundreds of hours of training, vs. 5-40 for cars. Only professional pilots/drivers have continuing training requirements mandated for them. None of us require training to operate an elevator, and in the long term, Autonomous cars will become for our grandchildren what elevators are for us today. You get in, you push the button, and you eventually arrive at your destination.
Or you don't and it (usually) fails safely and you wait for rescue.