@Mervynrs, an interesting argument -- sort of out of left field.
And yet, somehow I don't agree that "speed" is the key reason for the safety issues of cars. To my mind, the growing list of bells and whistles now added to cars is the culprit, even though some of those new features are being developed for safety reasons.
As a complete departure from the current approach of improving automotive safety systems through computerisation, I would like to pose the following question:
Why are regular automobiles designed and built with the capability to move at speeds that everyone agrees can cause lethal harm? If all the engines were "governed down," there might well be no need for most of the layers of safety-system architecture in the first place!
The presence of speed limit laws in every nation is an admission that we all know what the dominant risk factor is. So why not fix it at the source?
There is an extensive literature on the question 'how safe is safe enough', and you might start with the early chapters of Nancy Leveson's book 'Safeware: System Safety and Computers' (though it is somewhat dated, and she has a new book in the works.)
Forcing a hardware / software dichotomy on the safety question is unwise, as a significant subset of risk involves aspects of both domains, and their interaction.
One issue is 'what are the alternatives?' In the case of anti-lock braking, we add a system that could potentially interfere disastrously with braking, but which, when it works, reduces the frequency and severity of accidents. In the case of a car's throttle, I don't know if there are any compelling reasons for full-authority digital control, from a safety perspective.
It is well established that redundancy can effectively mitigate random physical errors to the point where it is no longer the dominant risk (it is not, however, effective for software errors, as different developers tend to make related mistakes, so the errors in independently-developed implementations of the same requirements tend to be somewhat correlated.)
You quoted Larry's comment, "If you look at modern automotive control systems, they are beginning to introduce redundant voting controls" (emphasis added). This suggests, disturbingly, that the designers of automotive control systems are far behind the state of the art with regard to digital systems safety.
I don't know if you replied to my post by mistake, but nothing in what I wrote could be properly construed as indicating that I doubt the potential lethality of some software, or that I doubt it has actually happened. I read Nancy Leveson's highly informative report on the Therac-25 when it was first published, and I was appalled that the development of this safety-critical software was entrusted to an unqualified person, and that it was deployed without effective risk analysis, review, and testing.
This quote should have made my position clear:
"These risks can be effectively mitigated, if and ony if you make a serious effort to do so." (emphasis added.)
Effective mitigation does not mean 'eliminate all risk' for software any more than it does for any other technology.
We trust machines with our lives every day, and that is fine, but we should also remember that a machine is heartless and relentless: it will kill you in the blink of an eye if it gets the chance (and this applies to simple mechanical equipment as well as complex software-driven systems), it will feel no regret afterwards, and it will suffer no consequences.
Trust your life to a machine if you wish, but it should be a conscious decision and not just force of habit.
"It makes one wonder how blame can be attributed to software in a system in which the source of the error may have been a random SRAM bit that was flipped by an alpha particle or other natural radiation event."
If that were an unavoidable problem, the unavoidable conclusion would be that digital equipment is unsuitable for safety-critical purposes, especially for things such as a car's throttle, where mechanical linkages have worked well for decades, and where it is therefore particularly hard to make a case for accepting any additional risk.
The point here, however, is that these risks can be effectively mitigated, if and only if you make a serious effort to do so. If you are unable or unwilling to do that, do not use digital electronics where people's well-being is at risk.
"Is the failure being blamed on software, or is it an overall laxity of hardware plus software..."
None of the above. The blame is being placed on the people at Toyota who, in their complacent ignorance, failed to take reasonable steps to reduce the risk.
I find Mr. Eory's "things break, that's just the way it is" attitude disturbing. No-one with that attitude should have any responsibility in the development or deployment of safety-critical systems, or the policies that govern their use.