@Bert, as usual, thanks for your thoughtful comment here.
I do, however, believe that things aren't quite as black and white as you suggest.
As I understand it, the auto industry still needs a better legal framework for self-driving cars.
In a recent blog post, Strategy Analytics' Roger Lanctot pointed out:
As reported in ITS International: "Article 8 of the 1968 Vienna Convention on Road Traffic specifies that the driver must maintain permanent control of the vehicle. This limitation was amended, however, in March 2014 in response to the increasing automation of vehicle systems. Automated systems are now permitted as long as they can be overridden or deactivated by the driver. This has established the legal foundation for partially automated driving, since control of the vehicle may now essentially be assumed by systems as well."
Now if true, that makes the line between who is at fault less clear, doesn't it?
This subject reminded me of one of my favorite SF stories: "The Marching Morons" by C. M. Kornbluth, published in 1951. It's set in the somewhat distant future; the average IQ (by current standards) of Earth has fallen to 45, and civilization continues solely due to the efforts of a tiny minority of competent folks (0.6% of the population). One of the many things they created was an essentially autonomous vehicle, which only APPEARED to be driven by the "driver." Quite prescient of Cyril M. K.! There is a version available on-line; I won't give a link, as the copyright is still valid....
While many of the underpinnings of this tale are certainly "un-PC," it does raise some questions about the societal impact of this type of technological progress.
There is a distinct line between ADAS (where the technology practically does everything -- lane keeping, collision avoidance, etc. -- so drivers don't actively do anything, yet are still required by law to sit behind the steering wheel) and a fully autonomous car with no steering wheel or brake pedal.
But here's the thing. ADAS will continue to advance.
Many in the insurance industry are concerned about the moment when an ADAS feature fails, requiring the driver of a self-driving car (assuming there is still a steering wheel and a brake pedal, with a driver behind the wheel) to switch to hands-on driving in a split second. Now, where does the fault lie?
With ADAS, no matter how advanced, the driver will always be held responsible, because he or she is still in control. At the moment ADAS becomes fully autonomous and the human occupant is merely a passenger, liability will shift away from the "driver" toward the manufacturers. I agree with Bert -- this fact alone may prove to be a strong deterrent to fully autonomous vehicles.
If someday all cars are self-driving and all cars have collision avoidance, would there ever be car-on-car collisions again? Everyone would of course have to maintain their car's computer and camera systems, etc. Would there even be massive pileups of the type that happen in dense fog? Perhaps the cars' radar or lidar would stop that from happening.
I'm a little unsure whether you're talking about truly autonomous vehicles or driver assistance. If an accident occurs, it should be possible to determine who is at fault in either case, especially because (I have to believe) the more autonomous the vehicle, the more likely it will be equipped with a "black box." My car, already several years old, has one, associated with the OnStar system.
In assisted driving, where the driver is always on the job, as it were, I would expect the blame to go to the driver most of the time. For instance, ABS certainly helps a driver in panic braking situations. But even if it can be shown that ABS didn't quite "help enough" -- because the car skidded too much regardless, allowing the accident to happen -- my sense is that the human driver, rather than the ABS system, will be found culpable (it could have been tuned better, some "expert" might assert, but I doubt that will carry much weight in court).
Ditto for automatic stopping systems, which are also meant to be driver assistance systems. If the system doesn't quite prevent the accident from happening, the driver who would otherwise have been found at fault will still be found at fault.
Fully autonomous is another matter. Black boxes should help determine the cause, and the auto manufacturers will be shelling out the $. I agree that this fact alone may help deter the migration to fully autonomous driving. So I have to agree that no-fault car insurance would likely be a prerequisite for autonomous vehicles, for this reason.