@JCreasy I agree, proactive recalls are much better than after-the-fact ones. I heard from a mate that BMW had issues in the US that they didn't want to recall, and Mazda here did something similar. Buyers should vote with their feet, not necessarily for the car with the fewest problems but for the one with the best after-sales care.
As a lifelong gearhead, I know that worn or broken parts can kill you, but I don't believe claims of unintended acceleration given functional mechanical pedals, linkages, and the like.
I developed the verification and testing process for a firm that built embedded engine controllers, and this all sounds familiar. I'd been dubious about the Toyota failures, but I didn't realize that this car was drive-by-wire. Buggy software as the root cause of the failure mode is therefore completely plausible, despite no finding of mechanical or electronic failure.
If Barr's report is accurate, the software design, programming, and testing were ignorant, sloppy, and inadequate. The real shame is that this was completely unnecessary: we've known how to build highly reliable software systems for a long time without breaking the bank. Model-based testing is now a big part of that.
I'm not sure who's responsible for the hype and the inflammatory language ("a single bit flip could...," task death, dead task, dead app), but I guess that's what you have to do to make software failures tangible to a jury. It is interesting that no smoking gun is reported (recorded inputs and state with an incorrect output that directly caused the failure; i.e., it is not correct to say that a single bit flip caused the failure). In a tort case, circumstantial evidence can be sufficient, so it seems that evidence of poor software development alone was enough to convince the jury that it probably caused the failure.
This may be the first time that indicators of bad code (not actual results) were sufficient to get a judgment. If so, I hope this is a wake-up call for the people who manage this kind of system development and its risks: software hygiene isn't a fool's errand.
Hi, Bert. I appreciate a level of skepticism...but let's not get too cynical before we know all the facts.
Actually, I find it a "breakthrough" that the experts' group was able to demonstrate at least one way for the software to cause unintended acceleration, at a time when the Toyota case -- up until last week -- was viewed by many as an issue of floor mats, sticky pedals, or driver error.
But as Bert and I have pointed out, the designers of safety-critical systems, hardware and software alike, are ultimately dealing with probabilities, and their task is to reduce the probability of a dangerous incident to some acceptably low level, which can never be exactly zero.
Those familiar with ISO 26262 know that it defines four automotive safety integrity levels (ASILs) and various metrics, including the probability of violation of a safety goal (PVSG). The highest level, ASIL D, requires a PVSG of less than 10^-8 per hour, an order of magnitude lower than what IEC 61508 requires.
An argument could be made that safety systems meeting such requirements fail at a rate far below that of human behavior and decision-making behind the wheel. Again, I am speaking of automotive safety in general, without regard to the particulars of this case. We could imagine future standards requiring even lower error probabilities, but they will never be zero.
Consider that in North America alone, vehicle travel amounts to about 3 trillion miles a year, which works out to billions of hours spent behind the wheel. That is a large enough sample that even with the extremely low failure probabilities achievable through best engineering practice, failures will still be seen from time to time. But consider how that compares with the injury and fatality rates caused by human error.
Modern automotive safety systems make driving safer every year, and we are headed toward a future in which we humans are merely passengers in our vehicles and safety incidents are rare. Even so, the burden of responsibility and the cost of failure borne by the providers of these systems is far greater than it ever was on the fallible humans whose errors cause so many injuries and fatalities on our roads every day.