MADISON, Wis. — Toyota's unintended acceleration case, recently tried in Oklahoma and resulting in a jury verdict that found Toyota liable, brought to light key findings by embedded systems experts who had access to the source code for Toyota's electronic throttle system.
The findings include software that is defective and contains bugs, and -- in the 2005 Camry -- an electronic throttle control system with an inadequate safety architecture, whose design created a single point of failure with no redundancy in place.
At this point, EE Times does not have access to the 800-page report filed by Michael Barr, CTO of Barr Group, which concluded that misbehavior by Toyota's electronic throttle control system was a cause of unintended acceleration. Barr also served as an expert witness in the Oklahoma trial.
(The full report is in the hands of several lawyers. A redacted version of the report was filed in US District Court in Santa Ana, Calif., in St. John v. Toyota on April 12, 2013, according to Barr.)
But based on the court transcript of the Oklahoma trial and interviews with experts, EE Times has reconstructed what the jury heard.
EE Times has posted a series of stories examining the technical issues. These included a bit flip caused by memory corruption; the death of "Task X," which ultimately caused loss of throttle control and disabled a number of the fail-safes; what components were (or were not) inside the Camry's electronic control module; and what the regimen of vehicle testing ultimately found.
As a result, expert witness testimony from the Oklahoma trial -- now part of the public record and thus published by EE Times -- has opened the door to lively debate among EE Times community members.
Our readers discussed:
- Their own struggle with "probabilities" (software can never be 100 percent free of bugs, and there are ways to mitigate errors -- but how far must engineers go to lower that probability?)
- Compliance with software programming and automotive electronics standards
- Whether today's complex automotive software needs peer review
- What roles NHTSA should play in the future
- Inadequate design and testing done by Toyota engineers
- Black boxes in cars
- The driver's responsibility
They also discussed the safety of emerging self-driving cars, especially after hearing about the faulty software in the Toyota case.
Many EE Times readers are engineers engaged in designing systems or chips -- safety critical or not -- and they took Toyota's unintended acceleration case to heart. After all, the failures pointed out by the expert witnesses aren't just Toyota's problem. To a degree, they bear on all the hard choices engineers make when designing software and hardware architecture for their own systems.
Here is a summary of what community members have learned, argued, and suggested on the EE Times forum on Toyota's unintended acceleration case.
All about probabilities?
The issue of probabilities came up often in the forum threads on the Toyota case. How often does an error happen? Given that such an error (e.g., a bit flip) happens so rarely, our readers asked a legitimate question: How low is low enough when it comes to the probability of failure? As engineers who deal in probabilities every day, they're concerned about the implications for the future design of safety-critical systems.
Suppose that the engineers carefully considered SEU (single event upsets) and included fairly powerful ECC (error-correcting code) to guard against their ill effects. Perhaps they even considered how much higher the SEU rate might be in a high-altitude city during peak solar flare activity. Is that enough? As I mentioned above, we're still dealing with probabilities that can never be zero.
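To illustrate the kind of mitigation this reader describes, a single-bit error-correcting code can repair an SEU-induced bit flip after the fact. Below is a minimal C sketch of a textbook Hamming(7,4) encoder and decoder -- an illustration of ECC in general, not Toyota's implementation; all names are ours -- that corrects any single flipped bit in a protected nibble:

```c
/* Hamming(7,4): 4 data bits protected by 3 parity bits.
 * Bit positions 1..7 hold p1 p2 d1 p3 d2 d3 d4. */
unsigned hamming74_encode(unsigned nibble) {
    unsigned d1 = (nibble >> 0) & 1, d2 = (nibble >> 1) & 1;
    unsigned d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    unsigned p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    unsigned p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    unsigned p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (p1 << 0) | (p2 << 1) | (d1 << 2) |
           (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
}

unsigned hamming74_decode(unsigned code) {
    unsigned b[8];
    for (int i = 1; i <= 7; i++) b[i] = (code >> (i - 1)) & 1;
    unsigned s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    unsigned s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    unsigned s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    unsigned syndrome = s1 | (s2 << 1) | (s3 << 2);
    if (syndrome) b[syndrome] ^= 1;  /* syndrome == position of the flipped bit */
    return b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3);
}
```

Even so, as the reader notes, ECC only lowers the probability of undetected corruption; a double-bit upset would defeat this scheme, which is why the question "how low is low enough?" remains.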
I am in no way trying to defend buggy software or buggy hardware; I'm just asking, how far does one have to go, and will it ever be far enough?
I've worked around control software for nuclear devices, which obviously operate by a different set of rules than just about any other. One interesting safeguard is testing within the body of critical functions to ensure that the function was entered at the top, rather than as a random jump into the body of the code (potentially the kind of error that could result from cosmic rays)…
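The entry-point safeguard this reader describes can be sketched in C. In the hypothetical pattern below (all names are illustrative), a token is armed only at the function's legitimate entry and verified inside the body, so a wild jump into the middle of the code -- the kind of control-flow corruption a bit flip could cause -- finds an unarmed token and trips the fail-safe:

```c
enum { ENTRY_MAGIC = 0x5A5A };

volatile unsigned entry_token;  /* armed only by the legitimate entry point */
int fault_count;                /* counts trips of the stand-in fail-safe */

void fail_safe(void) { ++fault_count; }  /* placeholder for a limp-home mode */

/* Body of the critical section, factored out so that a wild jump which
 * bypasses the entry point can be simulated by calling it directly. */
int throttle_body(int demand) {
    if (entry_token != ENTRY_MAGIC) {  /* did control flow pass the top? */
        fail_safe();
        return -1;
    }
    entry_token = 0;   /* consume the token: each pass must re-arm it */
    return demand;     /* apply the validated command */
}

int actuate_throttle(int demand) {
    entry_token = ENTRY_MAGIC;  /* armed at the one legitimate entry */
    return throttle_body(demand);
}
```

A call to `actuate_throttle()` succeeds, while a direct call to `throttle_body()` -- standing in for a random jump into the middle of the function -- is caught and diverted to the fail-safe.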
…If you look at modern automotive control systems, they are beginning to introduce redundant voting controls. This is an effective way of eliminating this type of error, be it from hardware or software.
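The redundant voting this reader mentions is commonly realized as triple modular redundancy: three independently computed results are reconciled by a 2-of-3 majority vote, so a single corrupted channel is outvoted. A minimal C sketch (our own illustrative names, not any production design):

```c
/* 2-of-3 majority voter over three redundant channels.
 * Returns the majority value, or -1 if no two channels agree
 * (the caller should then demand a fail-safe state).
 * Any mismatch at all is flagged for diagnostics. */
int vote2of3(int a, int b, int c, int *disagreement) {
    *disagreement = !(a == b && b == c);
    if (a == b || a == c) return a;  /* a agrees with at least one peer */
    if (b == c) return b;            /* a is the odd one out */
    return -1;                       /* no majority */
}
```

The design choice here is that a single fault -- whether a hardware bit flip or a software error confined to one channel -- changes at most one of the three inputs and therefore cannot change the voted output.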
… as I said before, there are millions of vehicles on the road with this defective software. The loss of control condition is not occurring very often or we would be seeing a lot of Camrys in the ditch or being hauled to the scrapyard.
Still, it CAN happen -- 'under what conditions?' is, perhaps, a question that cannot be answered. And maybe that points to the core of the issue -- the software that controls safety-critical systems must be deterministic, that is, it must do action Z in case Y in time t +/- tx where tx << t. Clearly the Toyota engine control software does not conform to this requirement. Why are we, as a society, letting Toyota off the hook here? Because it doesn't happen very often?
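The determinism requirement stated here -- do action Z at time t within a tolerance tx, where tx << t -- can at least be monitored at runtime. The hypothetical C sketch below (all names are assumptions; a real system would pair this with a hardware watchdog) latches a fault on the first completion that falls outside the allowed window:

```c
#include <stdlib.h>  /* labs() */

typedef struct {
    long deadline_us;   /* t: nominal completion time, microseconds */
    long tolerance_us;  /* tx: permitted jitter, with tx << t */
    int  tripped;       /* latched once a window is missed */
} deadline_monitor;

/* Returns 1 if the action completed inside t +/- tx; otherwise latches
 * a fault and returns 0 so the caller can drop into a fail-safe state. */
int deadline_check(deadline_monitor *m, long completed_at_us) {
    if (labs(completed_at_us - m->deadline_us) > m->tolerance_us) {
        m->tripped = 1;
        return 0;
    }
    return 1;
}
```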
…It seems to me that Mr. Barr's work represents that unequivocal data -- this CAN happen and, as engineers, we all know that what CAN happen WILL happen sooner or later.
So, what is to be done?