I don't believe you will ever see autonomous cars be rigorously tested - there are just too many possible situations, and variations of each, to test. We certainly don't test humans to that standard, but we accept the risk of having human drivers (with all their flaws and impairments).
Perhaps a reasonable threshold IS by demonstration, that being when autonomous cars are approximately as capable as humans.
@SPLatMan, you raise a very good point that should not be forgotten: If everyone in every commercial market imaginable focuses solely on marketing only "perfect" devices that carry no human-safety or financial risk, the inevitable outcome is what I like to call the "rock strategy."
The rock strategy -- and I've met real people who seem to advocate it -- is this: If everyone would just get rid of all those blasted computers on their desktops and in their pockets and replace them with rocks, all of those terrible security and privacy problems associated with computers and modern communications would disappear.
The trouble is that most of us would also disappear, and rather unpleasantly at that. Our global economy long ago passed the point where our world can produce and ship food with sufficient efficiency for all of our population to survive without assistance from computers and communication networks, a premise that was vividly and horrifically explored (with engines failing also) in S. M. Stirling's novel "Dies the Fire."
So the trick is always this: What is the right balance of new functionality and new risk?
If done well, super-safe autonomous vehicles could be a huge and very real benefit to a large chunk of the population that has trouble driving safely. So one can argue, rightly I think, that if you can prove that you are hugely reducing overall risk to the population and to individuals by enabling super-safe, but not perfect, automation for drivers whose driving is legal but impaired, you have a strong argument for autonomous vehicles arriving sooner, not later.
I think what is going on here, though, is not a search for perfection, but the need for a better grasp of the "unknown unknowns" of deploying huge numbers of autonomous vehicles quickly. That infamous but often misunderstood phrase still best captures where the greatest dangers are in any massive new technology deployment. No one thought about strong metal fences along highways as anything but protective, until vehicles started getting sliced in half by colliding with the pointy ends of them. The dangers of metal highway fences were at that time an unknown unknown, a category of new risk that had not even been part of the mental model that people used to assess highway driving safety.
Almost unequivocally, autonomous driving deployed rightly can in the not-too-distant future both reduce risks and make life for a significant part of our population much easier.
But as Dr Cummings is trying to warn us, our balance looks to be off. We are leaning uncomfortably far towards letting the unknown unknowns of autonomous driving pop up in much the same way they did for metal fences along highways. But with modern analysis and modeling tools, it's likely that both these and more traditional safety issues can be foreseen in advance with sufficient precision to enable a safer, but still timely, deployment of very useful autonomous vehicles in the right markets.
Consider what liability insurance would cost for an autonomous car. One thing is for sure: until the actuaries have millions and millions of actual usage stats to review, insurance companies will charge through the roof for liability policies, if they offer them at all. When people's lives are at stake, potentially lots of people, with no good data to rely on to predict average results, the price will be extremely high. Probably more than the cost of the car for a few months of liability insurance, until there is lots of data from lots of autopilot cars logged over many years. It ain't gonna happen in large numbers for a long time, if ever.
Certainly long past my death. Gaming the AVs will be easy, and even fun; they will be so predictable. I like driving my car, so no AV for me. My boyish evil streak will get the satisfaction it needs. I can't wait.
Has anyone considered what a boon autonomous vehicles would be to terrorists? No need for a suicidal co-conspirator to deliver your package of oily fertiliser. Just load up the vehicle, set the target coordinates, and wait somewhere safe for the bang.
@Bert22306, I'm in a hurry! I am nearly 70 years old. I'm still driving well, and my health is holding up, but how long will that last? When must I hang up my driving gloves, goggles and scarf? I'm banking on DVs by 2022, maybe as an Uber-style service.
These cars must have the same type of rigorous testing, and government approvals, that aircraft get. Aircraft can fly themselves, so they can set the bar for testing these cars... However, testing and approval may be the easy part of getting these cars into daily use on our roads. Wait until the first accident occurs. Then the insurance companies, the manufacturer, and the courts will begin a battle that will take a decade, and many courts, to complete. And of course the driver must have insurance before driving the car. Oops, no human driver, so does the software purchase insurance? How old must the software be, sixteen? And if you do a software upgrade, does the software's driving record get deleted? Additionally, can an intoxicated person be the sole occupant of one of these cars? Wait, can the car take a five-year-old to school? Does the software get the speeding ticket? Yadda, yadda, yadda. My guess: these cars will be in the showrooms and on our highways in the year 2042.
Tesla's 'autopilot' mode can give engineers today a glimpse of autonomous driving in the field, when it goes right and when it goes wrong. Much of this is published online and on YouTube. A brief trial of my own revealed much to worry about.
Even if autonomous driving works 99% or more of the time, the more autonomous it is while still falling short of 100%, the more it lulls the driver into complacency. When will that need to take over happen? Given that it's unpredictable, and hence not in the algorithm, the driver has to stay alert through the entire period of autonomous driving. If drivers can't take their hands off the wheel, or their feet far from the pedals, if they can't text, watch videos or sleep in between, they might as well be driving all the way. It will be like having a 'student' driver at the wheel: you let them drive, but you have to jump in vigorously at a moment's notice, a moment that is not predictable and hence 'unreliable.'
It's probably better if passive subsystems of an autonomous driving system were widely deployed and tested today, such as those in accident-avoidance systems. From collision warning to active braking, these are driver-assistance devices rather than replacements. They use the sensors, and even the core algorithms, that an autonomous driving system requires, with or without active intervention on the vehicle. This would field-test segments of the system against varied weather conditions, all forms of interference, and system aging, 24/7/365, while remaining only adjunctive to driving and not mission-critical. Once that data is obtained, the final piece, the 'brain', can be inserted, but at least the maker has field data giving confidence that the eyes, ears, and limbs of the brain are as good as they can be.
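The "adjunct first" idea above can be sketched in code. In this hypothetical shadow-mode harness (all names, thresholds, and data are illustrative, not from any real system), the perception-and-decision pipeline runs alongside the human driver, only logging the cases where its decision disagrees with what the human actually did; those disagreements are exactly the field data the maker needs before trusting the 'brain':

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    range_to_lead_m: float    # distance to the vehicle ahead, meters
    closing_speed_mps: float  # positive = closing on it, m/s
    human_brake: bool         # what the human driver actually did

def shadow_decision(frame: SensorFrame) -> str:
    """What the (not yet trusted) autonomy 'brain' would do.
    Time-to-collision thresholds are made up for illustration."""
    if frame.closing_speed_mps > 0:
        ttc = frame.range_to_lead_m / frame.closing_speed_mps
        if ttc < 1.5:
            return "brake"
        if ttc < 3.0:
            return "warn"
    return "cruise"

def log_disagreements(frames):
    """Collect frames where the shadow system and the human disagree
    about braking -- the cases worth studying before going live."""
    disagreements = []
    for f in frames:
        decision = shadow_decision(f)
        if (decision == "brake") != f.human_brake:
            disagreements.append((f, decision))
    return disagreements

frames = [
    SensorFrame(60.0, 5.0, human_brake=False),  # TTC 12 s: both cruise
    SensorFrame(10.0, 8.0, human_brake=True),   # TTC 1.25 s: both brake
    SensorFrame(10.0, 8.0, human_brake=False),  # shadow brakes, human didn't
]
print(len(log_disagreements(frames)))  # prints 1
```

The key design point is that `shadow_decision` never actuates anything here; it is scored against reality while the human (or a proven assist feature) stays in control.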
@junko.yoshida, I hope that you or one of your co-authors is working on a technical take on what happened when Microsoft put its Tay learning bot online last week. They had to sack Tay less than a day later, as she moved with remarkable speed from innocence to being a foul-mouthed racist.
Learning is presumably a minor component of automotive autonomy, but it's hard to call any device "autonomous" if it doesn't include something that looks a lot like learning.
My specific question is whether some variant of Tay Sacked Syndrome could creep into autonomous cars. If, for example, autonomous cars can adapt to the specific driving needs and routes of their owners, some owners may figure out how to teach their vehicles over time to become lax about critical aspects of safe operation. And that doesn't even touch on what happens when hackers, both friendly (self-directed) and unfriendly, get involved.
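The erosion scenario above can be made concrete with a toy calculation. This sketch assumes a purely hypothetical system that naively adapts a following-gap parameter toward whatever the owner tolerates; the function name and numbers are invented for illustration, not taken from any real vehicle:

```python
def adapt_following_gap(learned_gap_s, observed_gap_s, rate=0.05):
    """Naive online adaptation: exponential moving average that drifts
    the learned following gap toward the gap the owner actually keeps."""
    return (1 - rate) * learned_gap_s + rate * observed_gap_s

gap = 2.0  # start at a conservative 2-second following gap
for _ in range(200):  # owner repeatedly tailgates at a 0.8-second gap
    gap = adapt_following_gap(gap, 0.8)
print(round(gap, 2))  # prints 0.8 -- the safety margin has been taught away
```

A lax owner never has to touch a setting: repetition alone drags the learned parameter below the safe starting value, which is exactly the slow-motion failure mode worth guarding against with hard floors on safety-critical parameters.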
The fact that Tay failed quickly and in an unanticipated fashion is exactly the type of point that Dr Cummings was trying to make in her hearings before Congress. Simply putting autonomy out into the field on a large scale, without really knowing what will happen, has the potential to lead to unanticipated bad outcomes. Tay thankfully was not in control of any life-critical systems, so the damage caused by her rapid failure was highly constrained. For a family in an autonomous vehicle, however, the implications of emergent scenarios that no one adequately anticipated are worrisome, to say the least.
A few years back, Dr Cummings did more to keep the now-emerging small-drone industry alive and expanding than any one person I know of. With her excellent media explanations of the commercial potential of small drones, she helped persuade people to give these devices a chance at a time when most folks associated the word "drone" only with military uses.
I am confident that the reason why Dr Cummings testified about this validation issue is much the same: She wants to keep an important emerging industry healthy and growing by keeping them from shooting themselves in the foot early in the game.