Although the quote about the danger of a "single bit flip" seems to have been in the context of software bugs -- it's hard to tell just from the quotes in this interview -- Barr also mentions single-event upset. Memory bit errors (the so-called "soft error rate") are more of a hardware & system design issue, at least to the extent that the design includes mirroring, error detection and/or correction, or other fail-safe measures.
At modern VLSI geometries, the soft error rate of an SRAM bit cell being bombarded with cosmic radiation at ground level is not as inconsequential as one might think -- especially for critical safety systems.
It makes one wonder how blame can be attributed to software in a system in which the source of the error may have been a random SRAM bit that was flipped by an alpha particle or other natural radiation event. Is the failure being blamed on software, or is it an overall laxity of hardware plus software that failed to prevent all of those 16 million possible ways a software task can die? How much fail-safing & hardware redundancy is enough to adequately protect against these events? In the end, it is a probabilistic issue, and the probability of failure will never be zero.
I've worked around control software for nuclear devices, which obviously operate by a different set of rules than just about any other. One interesting safeguard is testing within the body of critical functions to ensure that the function was entered at the top, rather than as a random jump into the body of the code (potentially the kind of error that could result from cosmic rays). One of the guys on our team was former military, and he told us that they had running bets whether the missiles would actually fire, given a valid control sequence. None of them believed that it would fire by accident.
If you look at modern automotive control systems they are beginning to introduce redundant voting controls. This is an effective way of effectively eliminating this type of error, be it from hardware or software.
If I may expand on my above comment a little further:
"Memory corruption as little as one bit flip can cause a task to die. This can happen by hardware single-event upsets -- i.e., bit flip -- or via one of the many software bugs, such as buffer overflows and race conditions, we identified in the code."
So he mentions hardware SEU, but also software bugs like buffer overflows & race conditions, which makes me wonder the following:
Consider a hypothetical safety-critical system that many might consider very well-engineered. Suppose that the software in this system is so well done & well-tested that there are no buffer overflows, no race conditions, no possibility of software-induced memory corruption whatsoever. In this hypothetical near-perfect system, the only way for memory to get corrupted is by SEU, and then only if the SEU goes uncorrected or the fail-safe systems fail to guard against it.
Suppose further that the engineers carefully considered SEU, and included fairly powerful ECC to guard against its ill effects. Perhaps they even considered how much higher the SEU rate might be in a high-altitude city during peak solar flare activity. Is that enough? As I mentioned above, we're still dealing with probabilities that can never be zero.
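For concreteness, the principle behind such ECC can be illustrated with the classic Hamming(7,4) code, which corrects any single flipped bit in a 7-bit word. This is a toy sketch of the idea, not the much wider codes a real ECC memory controller would use:

```python
def hamming74_encode(d):
    # d: four data bits. Returns the 7-bit codeword with parity
    # bits at (1-based) positions 1, 2 and 4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    # Recompute the parity checks; the syndrome is the 1-based
    # position of a single flipped bit (0 means no error detected).
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]  # recovered data bits
```

Any single SEU in the stored 7 bits is corrected transparently; it is multi-bit upsets, and errors outside the protected array, that bring us back to the probabilistic argument above.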
I am in no way trying to defend buggy software or buggy hardware, I'm just asking how far does one have to go, and will it ever be far enough?
Larry: I had already posted the above before I saw your reply.
"If you look at modern automotive control systems they are beginning to introduce redundant voting controls. This is an effective way of effectively eliminating this type of error, be it from hardware or software."
Redundant voting controls, dual CPUs running the same code in lock step, and so on. But the key statement you made is that these are a way of "effectively eliminating this type of error" and I am asking how effective must "effectively" be, in quantitative terms?
I am in no way trying to defend buggy software or buggy hardware, I'm just asking how far does one have to go, and will it ever be far enough?
A fair question, but you don't want to go with the logical fallacy of "If it isn't 100% then it is useless." Examples of this type of thinking:
A seat belt won't protect you from all accidents, so you might as well not wear one at all.
A car lock won't protect your car from all thieves, so you shouldn't even bother locking the car. In fact, make it more convenient for yourself by leaving the keys in the ignition.
Is a seat belt good enough if people are still dying in car crashes? Do you see the fallacy of this type of thinking?
We'll never get 100% safe, but I'll definitely go for 'safer'. And we can have standards and tests for safe design practices that lead to what is safe enough.
We understand SEUs and their effects pretty well. To support military projects, many logic synthesis tools can automatically implement logic that isn't vulnerable to single bit flips. People here have given examples of how code can be designed to handle unexpected jumps or variable flips, and these kinds of effects can be predicted and tested.
You can never get 100% error free, but implementing certain design styles and testing can definitely improve safety. I'll go with that, over nothing at all.
Frank, just to clarify the findings by the experts' group in this case, let me add a few more details.
According to the experts' group,
"2005 Camry L4 source code and in-vehicle tests confirm that some critical variables are not protected from corruption. For example, a) Mirroring was not always done; and b) No hardware protection against bit flips."
The group also found that "sources of memory corruption are present." The group noted that "Stack overflow can occur; and there are software bugs -- NASA found bugs and Barr Group has found others."
The group thus concluded that it found enough evidence that "Toyota's ETCS software can malfunction."
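The "mirroring" mentioned in the findings is commonly implemented by keeping a critical variable together with its bitwise complement and checking the pair on every read. A minimal sketch, with class and field names invented for illustration:

```python
class MirroredVar:
    """Store a value together with its bitwise complement; a bit flip
    in either copy breaks the invariant and is caught on read. This is
    a sketch of the general technique, not Toyota's implementation."""

    MASK = 0xFFFFFFFF  # assume a 32-bit variable

    def __init__(self, value=0):
        self.write(value)

    def write(self, value):
        self._v = value & self.MASK
        self._v_inv = ~value & self.MASK  # the mirror copy

    def read(self):
        # XOR of a value and its complement must be all ones;
        # anything else means one of the copies was corrupted.
        if (self._v ^ self._v_inv) != self.MASK:
            raise RuntimeError("critical variable corrupted")
        return self._v
```

In embedded C the mirror is typically placed in a separate RAM region so a localized corruption event is unlikely to hit both copies consistently.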
There is an extensive literature on the question 'how safe is safe enough', and you might start with the early chapters of Nancy Leveson's book 'Safeware: System Safety and Computers' (though it is somewhat dated, and she has a new book in the works.)
Forcing a hardware / software dichotomy on the safety question is unwise, as a significant subset of risk involves aspects of both domains, and their interaction.
One issue is 'what are the alternatives?' In the case of anti-lock braking, we add a system that could potentially interfere disastrously with braking, but which, when it works, reduces the frequency and severity of accidents. In the case of a car's throttle, I don't know if there are any compelling reasons for full-authority digital control, from a safety perspective.
It is well established that redundancy can effectively mitigate random physical errors to the point where it is no longer the dominant risk (it is not, however, effective for software errors, as different developers tend to make related mistakes, so the errors in independently-developed implementations of the same requirements tend to be somewhat correlated.)
You quoted Larry's comment, "If you look at modern automotive control systems, they are beginning to introduce redundant voting controls" (emphasis added.) This suggests, disturbingly, that the designers of automotive control systems are far behind the state of the art with regard to digital systems safety.
Frank, yes, the memory corruption referred to here is caused by software defects.
Now, there are different types of software defects that cause memory corruption. They include:
- Buffer overflow
- Invalid pointer dereference/arithmetic
- Race condition (a.k.a. "task interference")
- Nested scheduler unlock
- Unsafe casting
- Stack overflow
The experts' group found software defects in the 2005 Camry L4 in every single category listed above.
"It makes one wonder how blame can be attributed to software in a system in which the source of the error may have been a random SRAM bit that was flipped by an alpha particle or other natural radiation event."
If that were an unavoidable problem, the unavoidable conclusion would be that digital equipment is unsuitable for safety-critical purposes, especially for things such as a car's throttle, where mechanical linkages have worked well for decades, and so where it's particularly hard to make a case for any additional risk.
The point here, however, is that these risks can be effectively mitigated, if and only if you make a serious effort to do so. If you are unable or unwilling to do that, do not use digital electronics where people's well-being is at risk.
"Is the failure being blamed on software, or is it an overall laxity of hardware plus software..."
None of the above. The blame is being placed on the people of Toyota who, in their complacent ignorance, failed to take reasonable steps to reduce the risk.
I find Mr. Eory's "things break, that's just the way it is" attitude disturbing. No-one with that attitude should have any responsibility in the development or deployment of safety-critical systems, or the policies that govern their use.
We trust machines with our lives every day, and that is fine, but we should also remember that a machine is heartless and relentless. It will kill you in the blink of an eye if it gets the chance (and this applies to simple mechanical equipment as well as complex software-driven systems); it will feel no regret afterward, and suffer no consequences.
Trust your life to a machine if you wish, but it should be a conscious decision and not just force of habit.
I don't know if you replied to my post by mistake, but nothing in what I wrote could be properly construed as indicating that I doubt the potential lethality of some software, or that I doubt it has actually happened. I read Nancy Leveson's highly informative report on the Therac-25 when it was first published, and I was appalled by the fact that the development of this safety-critical software was entrusted to an unqualified person, and deployed without effective risk analysis, review and testing.
This quote should have made my position clear:
"These risks can be effectively mitigated, if and only if you make a serious effort to do so." (emphasis added.)
Effective mitigation does not mean 'eliminate all risk' for software any more than it does for any other technology.
One of the challenges in automotive software design is fail-safety. Ideally, you want the failure fallback to be at minimum passive, at best natural -- that is, you want the path of least resistance upon failure to result in the safest outcome. That's not such a big problem with acceleration -- if the driver has a heart attack, then (in most cases) his foot relaxes from the accelerator and the car at minimum won't go faster and will ultimately slow to at least idle speed. The natural course of events happens to be the best one for safety, and requires no additional effort to invoke (in other words, it is passive). Obviously, active subsystems could be applied to the situation to improve safety -- invoking the brake, for example -- but the natural behavior contributes to, rather than detracts from, safety.
Braking is an entirely different problem. First, because of legacy mechanical designs in cars, braking is not a passive behavior -- you have to DO something. Consider that one of the scenarios in which braking would be invoked as a fail-safety measure would be in the event of system power failure. Where would power come from to depress the brake? Where would power come from to feed the electronics to compute the need to depress the brake? Complicating the issue is the fact that many power brake systems are vacuum-assisted -- vacuum that goes away when the engine stops, markedly altering the amount of pressure needed to activate the braking system. In hybrids, this is further complicated by hardware and electronics that attempt to tap the residual and excess energy being thrown off during braking.
Electric cars can be made significantly more fail-safe for braking, because the electric motors used are also generators when the stator is not being powered (and generators cause drag).
The best solution for the problem is simple, but requires a redesign of legacy technology. That solution proposes electronically regulated electric clutching at the drive wheels, where, unless all systems throughout the vehicle are optimum, no forward power is allowed to pass through to the wheels, and where the braking systems are engaged by default whenever the car is not expressly powered to go forward.
I've been professionally writing software for thirty years, and there is no code I can imagine that could be deployed to solve the issue through electronics alone that wouldn't have bugs, and therefore fail under some condition.
One way to carry the passive concept a step further would be for the "dead man switch" to invoke braking rather than merely releasing the throttle. I believe that already exists in trains. The challenge in cars is that we have two controls, the throttle and the brake, rather than a single control that when released would return the car to a stationary condition. We also have to be careful about stopping too quickly if the surrounding traffic isn't following suit. At a minimum, the "dead man switch" would do well to provide some external indication that the car is stopping. Currently, releasing the accelerator does not cause the brake lights to illuminate.
One of the safety standards for automotive electronics systems is ISO 26262, which was adapted from IEC 61508, a popular standard followed in industry for the "functional safety" of programmable electrical and electronic systems. I have worked on a number of safety programs per IEC 61508. In my experience, the possible causes of failure -- such as "unprotected critical variables", "...tasks can die without the watchdog resetting the processor", undetected erroneous bit flips, etc. -- could have been averted if the embedded design of the throttle control system had been developed per the safety standard. Unfortunately, ISO 26262 was introduced after 2007 (most probably in the 2010-2011 time frame), and I guess it was not mandatory for the automotive industry to comply with IEC 61508 for automotive electronics system safety before that.
@Antony Anderson: Thanks for sharing the link! I am from the industrial automation domain, and I have seen customers asking for compliance with IEC 61508 exert more direct influence than the regulatory authorities, which eventually led the regulatory authorities in the US and EU to make it mandatory for industrial safety-critical systems. Unfortunately, in the automotive space, technology is advancing at a fast pace (electronics being used more and more) compared to the pace at which standards are upgraded and regulatory bodies bring in the necessary requirements/norms making it mandatory for the automobile industry to get their systems certified by independent assessors such as TUV / Exida.
Unless you design a totally benign product, you should expect your code to be examined by an expert witness as a matter of certainty. Ford and Chevy do not usually get to see all the code; just the expert witnesses for the prosecution and defense do. This is how $80-per-share Honeywell stock of mine became $16 per share after a 757-related verdict: it was found that the design should not have required the pilot to manually flip the nav database data from one bank of memory to another when crossing a certain line on the globe. In the case it was shown that other compiler vendors had the technology to do this bank switching automatically.
But as Bert and I have pointed out, ultimately the designers of critical safety systems, both hardware & software designers, are dealing with probabilities, and their task is one of reducing the probability of a dangerous incident to some acceptably low level, which can never be exactly zero.
Those familiar with ISO 26262 know that it defines four automotive safety integrity levels (ASILs) and various metrics, including the probability of violation of a safety goal (PVSG). The highest ASIL level, ASIL D, requires a PVSG (1/hour) of less than 10^-8, which is an order of magnitude less than that required by IEC 61508.
An argument could be made that safety systems which meet such requirements reduce the failure rate of these systems to a level far less than the failure rate of human behavior & decision-making while driving. Again, I am speaking of automotive safety in general, without regard to the particulars of this case. We could imagine future standards requiring even lower error probabilities, but these will never be zero.
Consider that just in North America, vehicle travel amounts to about 3 trillion miles a year, amounting to billions of hours spent behind the wheel. That is a sufficiently large sample size that even with the extremely low failure probabilities that result from best engineering practices, failures will still be seen from time to time. But consider how that compares with the injury and fatality rates caused by human error.
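As a back-of-envelope illustration of why a large exposure makes even tiny failure rates visible (the average speed and the per-hour budget are assumptions for the arithmetic, not measured data):

```python
# ~3 trillion vehicle-miles/year at an assumed 30 mph average speed
vehicle_miles_per_year = 3e12
assumed_avg_speed_mph = 30
driving_hours = vehicle_miles_per_year / assumed_avg_speed_mph  # 1e11 hours

# Even at the ASIL D budget of 1e-8 safety-goal violations per hour
# (applied, for illustration, to every driving hour), the expected
# number of violations across the fleet-year is on the order of a
# thousand -- vanishingly small per hour, but never zero in aggregate.
asil_d_budget_per_hour = 1e-8
expected_violations = driving_hours * asil_d_budget_per_hour  # ~1000
```

The point of the arithmetic is not the exact number, which depends entirely on the assumed inputs, but that "probability never zero" plus "enormous sample size" guarantees occasional failures.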
Even as modern automotive safety systems make driving safer every year, and as we look toward a future in which we humans are merely passengers in our vehicles and safety incidents are rare, the burden of responsibility and the cost of failure borne by the providers of these systems remain far greater than those borne by the fallible humans whose errors cause so many injuries and fatalities every day on our roads.
It's certainly the case that tasks can die, and require a system reboot. That's why you have watchdog timers in control system software. In the description of the problem, it appears that several tasks died simultaneously, although we don't know which tasks nor how simultaneous they were.
And it's also not clear whether individual task were monitored correctly, and whether it was the simultaneous nature of the failures that created a case where the reboots didn't occur.
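One common way to monitor individual tasks is to let each one check in with a software watchdog layer, which services the hardware watchdog only when every registered task has reported since the last kick. A minimal sketch, with all names invented for illustration:

```python
class TaskWatchdog:
    """Service the (hypothetical) hardware watchdog only when every
    registered task has checked in since the last kick. If any single
    task dies, the hardware watchdog starves and resets the processor.
    Illustrative sketch, not any particular RTOS API."""

    def __init__(self, task_names):
        self.expected = set(task_names)
        self.checked_in = set()
        self.kicks = 0

    def task_alive(self, name):
        # Called periodically from within each monitored task.
        self.checked_in.add(name)

    def monitor_tick(self):
        # Runs periodically. Returns True if the hardware watchdog was
        # serviced, False if a reset would be allowed to occur.
        if self.checked_in == self.expected:
            self.kicks += 1
            self.checked_in.clear()
            return True
        return False
```

The design choice this illustrates is exactly the issue raised above: if only one task kicks the watchdog on behalf of everyone, other tasks can die without triggering a reset.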
Also, it looks like they found several potential mechanisms, not necessarily THE cause. One way to design around this sort of problem, although nothing will be 100 percent, is to have redundant processes do the same computations, and then compare the control signal at the output. If there's no match, you default to no acceleration.
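The output-comparison idea can be sketched in a few lines; the essential design choice is that disagreement defaults to the safe state of no acceleration (function and parameter names are illustrative):

```python
def voted_throttle(cmd_a, cmd_b, tolerance=0.0):
    """Compare the outputs of two independently computed throttle
    commands; on disagreement beyond `tolerance`, fail safe to zero
    acceleration. Illustrative sketch of the comparison scheme."""
    if abs(cmd_a - cmd_b) <= tolerance:
        return cmd_a
    return 0.0  # fail-safe default: no acceleration
```

With three channels instead of two, a majority vote can additionally mask a single faulty channel rather than merely detecting the disagreement.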
The last safety measure is of course the driver. If unintended acceleration occurs, certainly in a 2005 car, put the car in neutral and shut off the engine!
Yeah, throwing it in neutral sounds like the ultimate solution. However, in that instant of completely unexpected acceleration, much damage can be done before even the most vigilant person can respond.
True, Caleb, "mere humans" can be taken by surprise and perform all sorts of erroneous responses.
But in this specific case, where we're talking about the throttle, it's not clear what was involved. For example, it does not appear difficult to compare the throttle command to the fuel intake with the accelerator pedal position, as a reasonableness check. Is it that such a check was not done, or that for some reason, it failed? Or was it associated with a cruise control malfunction?
"If unintended acceleration occurs, certainly in a 2005 car, put the car in neutral and shut off the engine!"
One thing you need to be aware of - in most modern automobiles with an automatic transmission, shifting gears is really more of a "suggestion" than a command.
Said another way, there is a CPU in between the gear selector switches that are being opened and closed, and the transmission. If the very CPU which is causing UA is responsible for monitoring those "gear shift suggestions"... oh dear! So much for shifting into neutral.
I drive a manual transmission because it's fun, but I'm starting to see the value in the ability to physically disconnect the transmission from the engine.
I don't think we'll see the mechanical connection from pedal to brakes go away any time soon, but I wonder how far away we are from "Steer by Wire".
P.S. Same thing goes for many of the "push button start" vehicles - there is no key to rip out of the steering column. Press and hold the ON/OFF switch for a few seconds while hurtling down the road at 130MPH like Rhonda Smith? (Just find her 10 minute testimony on YouTube and tell me she's not credible!)
The driver claims that full-force braking had been applied. If that is indeed true (though there is no particular reason to take it for granted), then the failure must have crippled both E-gas (forcing the engine to overrev) and the ESP/ABS/brake assist that could potentially loosen pressure on the brakes.
I had to at least skim through the transcript for details, though (thanks for providing it).
Speaking of standards, though, the expert group did find that Toyota failed to comply with "OSEK," an international standard API specifically designed for use in automotive software. Toyota's Ex-OSEK850 version was not certified as OSEK compliant, according to Barr.
"failed to comply" suggests that OSEK compliance was mandatory. I'm pretty sure that's not the case. More in general, it seems to me that the article could do a better job in providing context.
A 2005 electronic controller was most likely designed in 2002, given the long and rigorous tests that are standard practice in automotive. So it may be unfair to compare a 2002 design with what is considered state of the art in 2013.
Somebody else has already pointed out that ISO 26262 did not exist then, but also I would bet that automotive-grade dual-core lock-step microcontrollers with SRAM ECC did not exist then.
Technology goes forward by improving on the existing state-of-the-art, but that is a moving target.
It would be great if Barr Group could share their calculation of the probability of occurrence of the failure mechanisms they identified, and if they could compare that probability with the probability of a mechanical-only failure and also with the probability of electronic failure in other manufacturers' vehicles of the time. Which, I think, is the definition of state of the art.
The "failed to comply" simply refers to the general OSEK compliance testing.
There is (or was - it's been a while since I was involved in OSEK) a requirement that you submit your OSEK implementation for compliance testing before you are allowed to call it an OSEK-compliant operating system. Toyota's OSEK apparently hadn't been submitted for this testing, so was not officially OSEK-compliant (and couldn't legally refer to their OS as OSEK - as the trademark terms for OSEK say the only permitted use is for compliant OSs)
In most automated machine control systems -- whether hardwired, PLC-based, or computerized -- there is one big RED push button on the control panel, labelled "EMERGENCY STOP". This button, when pushed, deactivates all electronic controls and brings the machine to a halt in whatever state it is.
A similar master stop button in the hands of the driver may be the solution for all kinds of exigencies arising out of hardware/software malfunction, and could avoid many accidents caused by system failures that require emergency manual intervention.
For the much touted "self-driven" car designers this is a lesson to learn.
This may be OK in an automated factory or warehouse situation. In general, humans are not inside the machines being stopped by hitting the E-STOP switch, or you have people stand clear before you do it (as when administering a shock from a defibrillator).
However, in a car, that is highly dangerous. Take a drive-by-wire car: what happens when you do hit an E-STOP button that disengages everything? Physics isn't bound by the E-STOP. The car will continue traveling in the direction it is moving (likely now skidding or sliding, and if you're unlucky, road compliance will cause the steering to move around) with no way for the driver to control its motion. You can't steer out of trouble, you can't modulate the brake, and if the doors are locked or the windows closed, can you then open them?
Without manual controls that can control some of these things or ejector seats that activate when you hit the E-STOP, doing so in a car is very likely more dangerous than having the car attempt to recover (or continue to malfunction in a particular way).
This may be true for other automakers too. Given the speed at which new technology comes to market, they are also prone to similar errors. What steps are suggested to prevent future errors like this? Is it really possible to prevent them 100%?
There is in my opinion no way of 100% preventing an uncommanded wide open throttle condition occurring from time to time somewhere in a world population of over 1000 million vehicles. However the effects of a wide open throttle could be largely prevented by means of a totally independent fail-safe, a kill switch for example, that reduces engine power in an emergency. The present situation in which drivers have to brake against full engine power is totally unnecessary and potentially very dangerous. From a functional safety point of view it is unacceptable to make the driver the fail-safe for the malfunctioning electronics.
Hot News for 1897 – Preventing runaway electric taxis
Improved electric hansom cabs introduced by the Electric Vehicle Company in New York were powered by two Westinghouse motors of 1.5 kW at 800 RPM.
An emergency button, operated by the driver's heel, could render the whole system powerless.
Mom, Gijs: The Electric Vehicle -- Technology and Expectations in the Automobile Age
"There is in my opinion no way of 100% preventing an uncommanded wide open throttle condition occurring from time to time ..."
Yes, this is all about probabilities, and btw, exactly the same holds for mechanical throttles. I myself had this occur to me, in the pre-electronic car control era (well, not full throttle, but certainly open throttle).
It should not be too difficult to design throttle controls that only give the command for a short period of time, which is the way we tend to do this sort of control. E.g., you read the throttle or other command signal at, say, 10 Hz. If the signal is not consistent for n hits, you either go to an alternate source, or you fail safe. If updated commands are not received by the output process, again you fail safe. If the output process dies, again the throttle controller fails safe.
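The sampling scheme described above might look like this in outline; the class and its parameters are invented for illustration:

```python
class DebouncedCommand:
    """Accept a new command value only after it has been consistent for
    n consecutive samples (e.g. at a 10 Hz read rate); revert to a
    fail-safe value when fresh samples stop arriving. Illustrative
    sketch of the scheme described above."""

    def __init__(self, n_required, fail_safe=0):
        self.n_required = n_required
        self.fail_safe = fail_safe
        self._candidate = None
        self._count = 0
        self.output = fail_safe

    def sample(self, value):
        # Called once per read cycle with the latest command signal.
        if value == self._candidate:
            self._count += 1
        else:
            self._candidate = value
            self._count = 1
        if self._count >= self.n_required:
            self.output = value   # confirmed: pass the command through
        return self.output

    def timeout(self):
        # Called when no updated command arrives in time: fail safe.
        self.output = self.fail_safe
        self._candidate = None
        self._count = 0
        return self.output
```

A transient glitch on the command line thus never reaches the output, and a dead upstream process drives the output to the fail-safe value rather than latching the last command.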
I've had one odd situation in which the valves we were controlling would slowly cycle open with only a single discrete command. So EVEN IF that single discrete command corrected itself after 100 msec, the valve would continue to cycle to full open, before laboriously beginning the close cycle. A very unfortunate combination of events. Since changing the valves appeared to be impossibly difficult, we ended up dramatically improving the error detection logic before closing any discrete signal to valves, which solved the problem (well, at least for several 10s of thousands of years, doing the statistical analysis).
I suspect that the Toyota throttle issue might be caused by a similar combination of unfortunate coincidences.
Bert22306, I think that your thoughts might be centering in the right area. Your valve controller analogy is probably a fairly good functional fit to the electronic throttle control, except that I would imagine that a valve controller drives the valve both open and closed, whereas my understanding is that in Toyota's case the PWM driver for the H-bridge motor drives the throttle open and it is spring pressure that closes the throttle valve until the limp-home position is reached, after which the H-bridge reverses and drives the throttle to the fully closed position.
There is an interesting redacted statement in Appendix A of the NASA report which reads:
"A.220.127.116.11 Duty-Cycle Conversion The duty cycle conversion modifies scales the command based on the battery voltage and converts the signal to a duty cycle percentage. The duty cycle conversion operates at a rate of 16 ms"
So the H bridge controlling the motor voltage, instead of working from a constant voltage supply, as I think would normally be the case, switches the DC supply voltage, which of course is far from constant, and the duty cycle is adjusted by the ECU to compensate for changes in the supply voltage. My personal view is that Toyota would have been well advised to regulate the voltage to the PWM with a standard voltage regulator and not try to combine the regulating function with the ECU software function controlling the throttle angle. It must surely add unnecessarily to the computing load on the ECU. Functionally the two configurations are effectively the same, but practically are very different.
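The compensation itself is simple arithmetic: to hold the average motor voltage constant, the commanded duty cycle must scale inversely with the supply voltage. A sketch under that assumption (function name and example numbers are illustrative, not from the ECU code):

```python
def compensated_duty_cycle(desired_volts, battery_volts):
    """Scale the commanded H-bridge duty cycle so the average motor
    voltage stays constant as the battery voltage varies. Illustrative
    sketch of the duty-cycle conversion described above."""
    if battery_volts <= 0:
        return 0.0  # no valid supply: command nothing
    duty = desired_volts / battery_volts
    return max(0.0, min(1.0, duty))  # clamp to the 0..100% range
```

Note the failure surface this creates: the throttle command now depends on the battery-voltage measurement as well, which is one reason a separate regulated supply (as suggested above) simplifies the analysis.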
The person who has done a lot of work on the implications of the duty cycle conversion is Dr Ron Belt who has written up several technical memos for circulation which you and others might find interesting as a stimulus to your own thinking. If you Google "Belt Hypothesis Toyota" you will find two memos on the subject which are hosted on my website.
Now there is another aspect to the toyota throttle mechanism itself that may be relevant and that is if you plot DC motor current against throttle angle you get a very wide hysteresis loop so that the current has to be greatly reduced before you get any reduction in throttle angle. This stiction is not mechanical stiction and appears to depend on the motor armature current.
Now this is with DC excitation, and it might be different with a 500 Hz pulsed DC voltage from the PWM, because you might expect to get a certain amount of jitter that might overcome the stiction. What is notable about this "stiction" is that it is much greater than the normal mechanical stiction. I have yet to take a motor to pieces and check the design, but a possible explanation is electromagnetic cogging torque. This could be very dependent on manufacturing tolerances if the airgap is small.
So in reality I suspect that we may be seeing a combination of a whole variety of factors, including the electromagnetic design of the motor, the gearing, the design of the PWM, the means for compensating for changes in battery voltage, timing errors, and the software, not to mention electrical contact intermittencies, all of which very occasionally might combine to cause the throttle to move to the wide-open position and remain there, but which under other circumstances might, for example, result in a sudden uncommanded deceleration. It will be interesting to see what comes out from under the Toyota all-weather floormat as a result of the Bookout case within the next few weeks.
Bert, I had a mechanical throttle malfunction also. It was a '53 Buick V8 with a Dynaflow automatic transmission. Somehow an acceleration attempt over-compressed a worn motor mount to the extent that the engine torque rotated the engine block, relative to the engine compartment, beyond the design tolerance for the integrity of the totally mechanical carburetor linkage, and it jammed wide open. I quickly turned off the key, which brought me to problem number two: no power steering, and I was on a winding road, so I had to turn the ignition back on to steer. A fortunate section of straight road allowed me to kill the engine and bring it to a safe stop.
I have analyzed systems many times where a minor, unnoticed software error could prove catastrophic. In one particular case, the probability of an incremental error happening was extremely small. However, the cumulative effect over a long period of time eventually proved catastrophic to the system's operation. The company had spent hundreds of thousands of dollars testing, in an attempt to figure out what was happening. I was brought in as an independent consultant for another purpose, but needed to examine the relevant code to do my performance analysis. In my final report, I mentioned the errors I had found in the code and their potential impact. The response was, "OMG, we've been trying for months to figure out what was going wrong." I billed them heavily that month and they gratefully paid it.
For all this talk of unusual software failures, one should recall that broken throttle return springs on a mechanical linkage are by no means unheard of. I've had one, and several of my friends have too, back in the 70s and 80s.
All this talk of "sudden acceleration" and the ensuing potential carnage is really a comment on appropriate driver training. A 70s muscle car going wide open throttle is going to accelerate pretty quickly, and you don't have the saving grace of a modern rev limiter, so the "shift to neutral" approach is likely to wind up with engine damage.
Back then, the argument for neutral was "you'll lose your power steering and power brakes," both of which are bogus. You're not going to be parallel parking, you're steering to the side of the road, and the turning effort isn't all that high when the power steering pump is off: people do have the belt fail or the pump fail, and nobody leaps up to sue because the car was uncontrollable. Likewise, power brakes are an assist, and vacuum operated. With the engine at WOT, there's not much vacuum anyway. In any case, you can still stomp on the brake and bring the car to a stop.
The real reason for the shift-to-neutral advice is that people would turn the switch to the LOCK position and remove the key, and then be unable to turn the steering wheel.
In any case, this is just a matter of appropriate training. Everyone should have the instructor turn the engine off while driving, so they know what it's like. My driving instructor did this; so did my father.
Now let's talk about "I stood on the brake and it didn't stop." There are lots of documented cases where drivers have believed they were on the brake, and were on the gas instead. But leaving that aside, there is no car made where the brakes cannot overpower the engine, IF, and only IF, you apply the brakes hard. Again this comes back to training. If you try to "ride the brake" to slow down (as opposed to stop), yes, you will overheat the brakes and they will lose their effectiveness.
The discussion on this story is remarkable in that apparently only a couple of commenters know that the brakes will stop the car regardless of what the engine is doing. When the plaintiff in this case claims that she was mashing the brakes to the floor but the car was still accelerating, but the brakes were then found to be fine after the incident, there is a greater than 99.9% probability she was mistaken or simply lying.
Go back and review the claims and cases during the Audi 5000 "sudden acceleration" era. Such as the driving instructor who swore he was on the brakes, meanwhile witnesses behind the car saw no brake lights and no brake failure was found in the car afterward. Or the elderly man who was sure his foot was on the brake as he crashed through a concrete barrier in a parking garage. Sure until he looked down and saw his foot on the gas, that is. At least he was honest.
Then there was the Audi dealership owner John Morzenti, who challenged CBS/60 Minutes (Remember their show on the Audi where they failed to disclose how they tampered with the Audi's engine, and also did not apply the brake?) to a 1 million dollar bet. He said they could do anything they wanted with the engine, as long as he got to sit inside with his foot on the brake pedal. CBS did not take the bet.
"Sudden acceleration" was investigated hundreds of times in the past, and other than some minor sticky throttles (Which would not have caused the wild claims made by the drivers.) no major problems with the cars were found. That's why in this case the plaintiff's attorneys had to come up with a new strategy focused on software, and of course that strategy had to include that the black box could be incorrect. Given a typical jury of non technical people, a strategy such as this had a good chance of success. How is the average non technical person going to have any clue whether a "software expert" is right, wrong, clueless, honest, or dishonest?? Especially since people generally want to believe in other people telling a heart wrenching story, and even more especially when they are up against a large "evil" corporation.
Kudos to the people asking for the expert's probability calculations, and to Bert for pointing out how bad an idea it is to brake with your left foot and push the gas pedal with your right. I've seen quite a few people driving this way, and I'm sure they were convinced they weren't riding the brake pedal, even though from behind I could see the brake lights remain on as they accelerated away from a stop. Of course not all two-foot drivers will ride the brake, but being in the habit of using different feet for the brake and gas pedals will absolutely increase the likelihood of pedal errors. Stick drivers of course have to use their left foot for the clutch, but this is not a problem. If you "accidentally" mash the clutch to the floor in a panic, it of course will not cause the vehicle to accelerate.
In my own experience I was driving a turbocharged Isuzu back in the late 80's. I confess that I frequently would floor the throttle to accelerate quickly. One day I did this and the throttle remained stuck after I let off the pedal. Instead of panicking, I tapped the pedal hard a couple of times trying to free what I thought was stuck throttle cable. When that didn't work I put my foot on the brake. While my stopping distance was of course a bit longer, the car did slow down to where I could easily pull off onto a side road and then easily come to a full stop. The turbo engine was wailing away, but no match for the brakes. I put the car in neutral, and then looked down to see that the very stiff floor mat was holding the gas pedal to the floor. Easy fix.
I suggest anyone who doubts the brakes find a stretch of open road, or a very large parking lot and try the test themselves. In this case do use both feet. From a stop, floor the brake with your left foot, and then try applying more and more throttle with your right. If your brakes are in proper working order the car will not be going anywhere even if you floor the gas pedal. Automatic transmissions only, of course.
But we are not trained to do the correct thing, nor do we do it naturally, when the mechanical controls that extend our sensors and actuators beyond our bodies STOP WORKING AS EXPECTED. The brain is running engramless; that is, it has no textbook answer handy when the car wants to keep accelerating to the moon.
And it's not just loss of the accelerator but the worst possible kind of malfunction: full power.
I looked at these electronic foot-pedal hardware and software packages for a school project for Challenge X.
I was appalled that no safety standards existed for the hardware or software in the industry, and that there was no oversight or help with safety design of these critical components for the schools either.
I did not get involved further because of this fact alone... a disaster in the making.
There are many possible solutions; some or all should be used:
Avionics-level software and hardware certification for power control and user-interface devices.
A simple big red pull-pin or pushbutton override.
Driver's-license training for stuck-throttle response.
One assumes that the hardware and software gaps identified in this ECU implementation are sobering for anyone implementing driver-assistance systems and drive-by-wire controls, as well as for those pushing fully autonomous cars.
Most automobiles today have multiple CPUs communicating over non-redundant networks with non-zero error rates; as the complexity of control increases in driver-assisted, drive-by-wire and autonomous vehicles, the ability to fully test and verify the implementation decreases. Even though the software and hardware may be designed to meet strict standards, the standards themselves have limitations and are open to some interpretation.
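Those non-zero error rates are normally handled by per-frame checksums, which can detect corruption but not repair it. A minimal sketch of the idea, using Python's `zlib.crc32` as a stand-in for the (different) CRC a real bus such as CAN computes in hardware:

```python
import zlib

# A sender appends a CRC to each frame; flipping even one bit of the
# payload changes the CRC, so the receiver can detect the corruption.
# Detection lets it drop or re-request the frame -- it cannot
# reconstruct the original on a non-redundant link.
payload = bytes([0x12, 0x34, 0x56, 0x78])
crc = zlib.crc32(payload)

corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]  # single bit flip
assert zlib.crc32(corrupted) != crc
print("single-bit corruption detected")
```

This is why bus errors show up as dropped or retried frames rather than silently wrong data; what the application layer then does with a missing frame is where the hard design choices live.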
My major concern is that automation systems built by multiple organizations have an almost zero chance of reaching the same decision in the same time in any given emergency (and faults are just another emergency). Putting lots of these systems in close proximity in high-speed flowing traffic may produce a classic Butterfly Effect when things do foul up.
We have so many things to fix to make these systems safe, and it is not helped by the performance achieved by some vehicles. Who on earth needs 0-60 in under 4 seconds, where unintended acceleration means the vehicle may be pulling 0.7 g from a standing start? With human perception/reaction times in the 0.5-second range at best (for a fully attentive driver braking, as here: http://www.ecu.edu/cs-dhs/ot/upload/AOTA_Brake_Reaction_Poster.pdf ), the vehicle has moved about 11 ft before you have any chance to react. (Just as disconcerting and deadly would be unintended braking at highway speeds for those around your vehicle.)
Big red buttons for emergency disconnects, brakes expected to dissipate the full engine power or any other human activated device you care to describe will not help. The reaction time of the automation will always be far shorter than any human reaction, so putting reliance on the human to be the arbiter in case of emergencies IMO verges on asinine.
JCreasey, autonomous vehicles can actually make the problem less difficult, not in overall complexity, but in oversight of the situation, situational awareness. In the Toyota scenario that we're discussing there is no way to independently judge intent or consequences.
I totally agree with you Les. My point was that any system where the human is in the loop as an arbiter or safety responder is problematic not that automation would not work.
If totally autonomous vehicles are the solution, then IMO there should be a central automation system with the cars as clients to it (V2I), not millions of standalone compute islands and certainly not island to island (V2V mesh).
With today's drive-by-wire we have the technology in place in many vehicles to centralize control instead of the island-based designs like the Google car. It would be cheaper and IMO more reliable to enlist a central controller than to try to be standalone or co-operative with island neighbors.
While lots of the work on compute-island vehicles tackles the problem of seeing an environment defined for a human driver (lanes, signs, other vehicles, etc.), a central system infrastructure (viewing from static road-sensor positions) has that knowledge built in (programmed). There is no need for lanes, signs, traffic lights, etc.
JCreasey, this whole thing is complicated in that the cost of vehicle controls, infrastructure and public acceptance are all huge issues. It won't all happen at once. There will be a mix of vehicles with various capabilities and drivers with varying responsibilities, skills and alertness. However, I am confident that the more automation here, the safer the roads will be.
Even where the infrastructure mostly commands, or directs the vehicle, there will still be a need for someone, or something, to drive the car in case there is a communication failure.
"Even where the infrastructure mostly commands, or directs the vehicle, there will still be a need for someone, or something, to drive the car in case there is a communication failure."
This goes to the very root of Toyota's current problems. It is very difficult to ensure that the firmware running the car is totally safe, and in a drive-by-wire system a breakdown in communications within the system may render it undrivable by a human. The computer(s) are in control; you may have no direct human control ability at all.
Unless the throttle, brakes, steering, and engine control have mechanical linkages, there is no reliable possibility of human intervention or backup control for failures. You either automate or stay manual.
In the case of a failure in the V2I link, an automated vehicle would slow down and stop using local sensors. The infrastructure knows it just lost communications with a client (heartbeat) and can move surrounding traffic out of the way (slow down and move aside).
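The heartbeat idea is simple enough to sketch in a few lines (all names and timings below are invented for illustration, not from any real V2I protocol): the infrastructure marks a vehicle "lost" once no heartbeat has arrived for a few intervals, and can then start clearing surrounding traffic.

```python
# Illustrative heartbeat monitor: a client is declared lost after
# MISS_LIMIT consecutive heartbeat periods with no message.
HEARTBEAT_PERIOD_S = 0.1
MISS_LIMIT = 3   # missed periods tolerated before declaring loss

class ClientMonitor:
    def __init__(self):
        self.last_seen = 0.0

    def heartbeat(self, now):
        """Record that a heartbeat arrived at time `now` (seconds)."""
        self.last_seen = now

    def is_lost(self, now):
        """True once the silence exceeds the allowed window."""
        return (now - self.last_seen) > MISS_LIMIT * HEARTBEAT_PERIOD_S

m = ClientMonitor()
m.heartbeat(now=10.0)
print(m.is_lost(now=10.2))   # False: still inside the 0.3 s window
print(m.is_lost(now=10.5))   # True: comms considered lost
```

The miss limit is the usual trade-off: too low and transient radio dropouts trigger false alarms; too high and the infrastructure reacts late to a genuinely silent vehicle.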
"Unless the throttle, brakes, steering, and engine control have mechanical linkages, there is no reliable possibility of human intervention or backup control for failures. You either automate or stay manual."
It looks like the trend is definitely away from manual control and toward some sort of automation. The accelerator pedal cannot directly control anything. It HAS to see the right foot as just one of the parameters that go into control decisions. There are advantages to making other controls, such as steering and brakes, mostly suggestions as to intent. That doesn't mean there can't be some looser driver control in the event of a degraded system. Certainly, as has been suggested, tapping the brake pedal should kill a runaway throttle.
I believe the Toyota problem is one of inadequate design and testing. I'm sure we will ultimately learn much from this. There are problems with technology but auto safety looks pretty good. There are a lot more factors than electronic control. If you go back 50 years to when there was only automated shifting you will realize that modern cars are much safer. Absolute perfection of control would nowhere near compensate for the poor state of tires, brakes, suspension, and body structure that we faced then. And... the best tires, brakes and suspension are made even more effective with the application of some sensors, processing power and various actuator mechanisms. There's no turning back.
I pretty much agree with your last paragraph but this must be seen as being able to operate in a heterogeneous environment, not just with vehicles that are pretty much at the command of the infrastructure.
(Just as disconcerting and deadly would be un-intended braking at highway speeds on those around your vehicle).
I have to disagree with this one; we should keep at least 3 seconds to the car in front, and if it's a long drive and we're likely to lose concentration, then 6 seconds, giving ample time to react in an emergency. I drive with the cruise control on on the highway and have my foot over the brake just in case. I've missed kangaroos and wombats and birds in flight by being alert and watching the road ahead. You should be able to stop before hitting the car in front even in a panic stop, or you're driving too close.
On the common theme of this article (not in response to your comments): sadly, automotive electronics is designed to a cost, in tight competition with other suppliers, with the winning bid being over as little as 50c (I worked for Delco Electronics for a number of years, and this is based on actual experience), so for something to be less than ideal is expected. I think there should only be large payouts for gross negligence. I don't have enough info to opine as to whether Toyota met this criterion, but really, if we want drive-by-wire and steer-by-wire then the design rigour must be more in tune with the aircraft industry, even if it means that the drive-by-wire system in car 'X' is a $50,000 option, none of this $500 option because we got it for 10c extra in the competitive bid process. If we want real solutions we need to start paying real prices for them.
@Etmax. Ah good old Aussie roads with the best potholes in the world (I'm an Aussie too).
At 60mph (88fps) you would maintain a minimum distance of 264ft on a US freeway ...and you'd like to have 528ft? mmm, I'd like to see you do that in peak hour on one of Sydney's or Melbourne's freeways.
Choices on car spacing aside, cars decelerate at different rates, and with ABS essentially universal your "emergency" braking is limited to what the system allows in wheel-speed differences. As we move to drive-by-wire, brake pressure is fully software controlled. A fault in the braking system might result in anything from a nice controlled maximum-g stop to a four-wheels-locked monster with attendant loss of directional control.
As in UA, this unintended braking (under fault conditions) might happen at any time, and IMO would be just as hard to have the human cope with. At least with UA, the vehicle is moving in the direction you are looking: with unintended braking the danger front is both in front and behind you.
As a last comment: as we move to more automation in cars, the spacing between cars may be set in software based on in-car sensors. Manufacturers are playing with "platooning" of cars on the freeway, with distances of about 20 ft at 60 mph between 10-20 cars (Mercedes and Volvo seem to have the lead here). http://www.newscientist.com/article/dn22272-out-of-control-driving-in-a-platoon-of-handsfree-cars.html#.Um_g7HCsi-0 While this is an island/V2V system and I don't really agree with it, it does get the traffic density up on the freeway, and should be valuable providing interaction between multiple platoons can be controlled. Platooning does, however, clearly show that if one vehicle in the platoon has an "unintended X" failure, it will be challenging to prevent V2V contact.
At 60mph (88fps) you would maintain a minimum distance of 264ft on a US freeway ...and you'd like to have 528ft? mmm, I'd like to see you do that in peak hour on one of Sydney's or Melbourne's freeways.
I've never been on a Melbourne freeway in peak hour where the speed gets much above 60kph, let alone 60mph. :-) (you knew that was coming, right? :-)
Platooning is an interesting one; it would avoid the cycling between 0kph and 60kph that occurs at various intervals on freeways, but as you say there is no margin for error, and who is responsible for the collision and possible death? Toyota paid dearly here, and while most drivers can't cope with their car in working order, they'd have little chance of coping with an X-failure in platooning; the car makers would likely be sued out of existence in that case.
People just like to blame someone, and usually the one with the deepest pockets rather than the one at fault.
@JCreasey I agree, proactive recalls are much better than after the fact. I heard from a mate that BMW had issues in the US that they didn't want to recall, and Mazda here did something similar. Buyers should vote with their feet, not necessarily for the car with the least problems but rather the best after-sales care.
1. A car's brakes in proper working order WILL stop the car under full throttle acceleration, whether you think it's a good idea or not.
2. There are only a handful of cars on the road that do zero to sixty in 6 seconds or less, let alone the 4 seconds you state.
3. Many attentive drivers will have reaction times less than 0.5 seconds.
4. Meanwhile inattentive/unskilled/intoxicated/elderly drivers may have reaction times in the seconds, and their reactions may also be so poor that their initial non reaction is more favorable than the results afterward.
Item 4 does not mean good drivers should not have manual mechanisms to enhance safety, although it is a strong argument for self-driving cars and graduated licensing. Automated safety systems such as radar are already prevalent, and self-driving cars are in testing in multiple locations. Better get ready to face your fears.
With the numbers of miles cars are driven and the large number of engine cylinder operations per mile (say 12,000 for a 6 cylinder car being driven at 60 mph at 2,000 rpm), low probability problems are likely to surface. That said, in electrically noisy environments with connectors subjected to adverse conditions and wires flexing, the possibility that a bit might get flipped doesn't seem surprising. I guess the question becomes: what processes enable engines to quickly return to proper operating modes when errors are detected? I know for sure that problem hasn't yet been solved on my home computer.
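To put numbers on "low probability problems are likely to surface": with n independent operations each having failure probability p, the chance of at least one failure is 1 - (1 - p)^n. A back-of-envelope sketch, where the per-operation rate and the mileage are invented purely for illustration (only the 12,000 operations per mile comes from the estimate above):

```python
# Hypothetical numbers: even a one-in-a-billion chance per operation
# becomes a likely event over a vehicle lifetime of operations.
p = 1e-9                    # assumed probability of an upset per operation
ops_per_mile = 12_000       # cylinder operations, per the estimate above
miles = 150_000             # an assumed vehicle lifetime
n = ops_per_mile * miles    # 1.8e9 operations
p_any = 1 - (1 - p) ** n
print(f"P(at least one event over vehicle life): {p_any:.2f}")  # 0.83
```

The striking part is the shape of the curve: once n·p approaches 1, the "rare" event is more likely to happen than not, which is why fleet-wide field failures appear for faults no test program will ever reproduce on demand.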
"........in electrically noisy environments with connectors subjected to adverse conditions and wires flexing, the possibility that a bit might get flipped doesn't seem surprising......."
You are right to draw attention to the potential problems caused by poor electrical contacts particularly in connectors. These problems are exacerbated by the use of the vehicle body as a ground return for electrical circuits.
There is a multitude of electrical connectors in the modern automobile, each with a pair of vulnerable electrical contacts. For example an engine ECU may have upwards of 50 connectors. Designers in the automobile industry carry out exhaustive Failure Modes and Effects Analysis (FMEA) on components, sub-systems and systems and use this as a basis for the design of fault detection software. Some manufacturers carry out a 'PIN FMEA' for each electronic control unit and its associated wiring harness that lists the potential failure modes of the circuit connected to each pin and the possible resultant effects in terms of system performance. In the case of sensors, the 'PIN FMEA' covers the failure modes of the entire sensor loop. This useful approach to identifying potential problems is deficient in two respects:
1 The failure modes are identified and treated "one at a time", whereas in practice, as far as multi-pin connectors are concerned, common-mode failures affecting several pins may occur more or less simultaneously. For example, although the likelihood of two sensors failing simultaneously may appear to be very small, should a multi-pin ECU connector come loose, the likelihood that several sensor circuits may simultaneously become intermittent is quite high.
2 The FMEA method as presently implemented does not sufficiently recognise and deal with short-duration dynamic intermittent faults. Faults are considered as if they will be open or short circuits, and the intermediate situation, where there are short-duration intermittencies, is not taken sufficiently into account. Intermittent contact faults in low-current sensor circuits excited by mechanical vibration will make a circuit noisy but, since the average circuit parameters may still remain within the bounds of what is deemed "normal", Electronic Control Unit (ECU) software designed to detect hard faults will not necessarily trigger diagnostic trouble codes (DTCs).
Electrical intermittencies may take many different forms, some of which might be very difficult to locate and confirm in a normal automobile servicing environment. Road-induced shocks and mechanical vibrations induced by the engine and transmission will stress potential points of electrical intermittency simultaneously.
Electrical contacts subject to vibration may become microphonic, as in the carbon microphone. Battery and ground terminals can become loose, giving the potential for the generation of large transient voltages on the DC power bus. Sensor connectors can become intermittent, resulting in false speed signals and false accelerometer readings.
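The point above about averages staying "normal" can be made concrete with a toy example (all numbers are synthetic, invented purely for illustration): a plausibility check on the mean of a sensor reading misses brief vibration-induced spikes, while a check on the spread flags them.

```python
import statistics

# Nominal 2.5 V sensor reading, sampled eight times.
clean = [2.50, 2.51, 2.49, 2.50, 2.51, 2.50, 2.49, 2.50]
# Same signal with two short excursions from an intermittent contact,
# arranged so the mean is essentially unchanged:
noisy = [2.50, 2.51, 0.10, 2.50, 4.90, 2.50, 2.49, 2.50]

for name, sig in (("clean", clean), ("noisy", noisy)):
    mean = statistics.mean(sig)
    sd = statistics.pstdev(sig)
    mean_ok = 2.0 < mean < 3.0   # typical "hard fault" window check passes
    sd_ok = sd < 0.1             # a spread check catches the intermittency
    print(f"{name}: mean={mean:.2f} ok={mean_ok}, sd={sd:.2f} ok={sd_ok}")
```

Both signals have a mean of 2.5 V, so DTC logic keyed only to a voltage window never fires; the standard deviation of the noisy trace is more than a hundred times that of the clean one, which is the kind of signature intermittency detection would need to look for.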
I don't understand how the obvious seems to be missing. No electronics can ever be 100% fail-safe because there will always be failures either in code or hardware. We know that, and it shouldn't be difficult to provide an external mechanism that will return the accelerator to idle if the brake pedal is used.
No-one (sensible) will need to use accelerator and brake at the same time during highway driving; it's normally expected that the same foot is used for both, and by design they are not supposed to be used together.
An external or separate micro-controller can easily sense that road speed is above a preset threshold, engine revs likewise, and the brake is applied. This can then be used as an override that forces the throttle back to idle, disconnecting the ECU if need be.
This arrangement could quite easily still allow 'heel-and-toe' operation for hill starts with a manual transmission (does anyone still do that?). At the same time, simple sensing would activate the separate micro if any of the defined criteria were met.
So, if road speed and/or engine revs are above preset limits, the throttle is open (or open beyond a 'reasonable' limit) and the brake is applied, the micro takes over and returns the throttle to idle or kills the engine completely. Normal human reaction is all that's needed to get the car under control.
Normal driving is unlikely to trigger the event because most people only have one right foot. Is this idea too simple?
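The interlock described above fits in a few lines of logic. A minimal sketch (the thresholds and the function name are invented for illustration; a real version would run on an independent micro with its own sensor inputs):

```python
# Watchdog rule: force the throttle to idle only when the brake is
# pressed WHILE the car is moving fast or revving hard WITH significant
# throttle opening. Low-speed cases (hill starts, heel-and-toe) pass.
SPEED_LIMIT_KPH = 10     # below this, brake + throttle is allowed
RPM_LIMIT = 1500
THROTTLE_LIMIT = 0.15    # "reasonable" throttle opening, scale 0..1

def override_to_idle(speed_kph, rpm, throttle, brake_pressed):
    """Return True if the watchdog should cut the throttle to idle."""
    moving_fast = speed_kph > SPEED_LIMIT_KPH or rpm > RPM_LIMIT
    throttle_open = throttle > THROTTLE_LIMIT
    return brake_pressed and moving_fast and throttle_open

print(override_to_idle(100, 4000, 1.0, True))    # runaway: True
print(override_to_idle(100, 2500, 0.05, True))   # normal braking: False
print(override_to_idle(2, 1200, 0.4, True))      # hill start: False
```

The value of keeping this on a separate micro is exactly the independence argument: whatever state the main ECU's software or memory is in, the override path does not share it.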
"No-one (sensible) will need to use accelerator and brake at the same time during highway driving; it's normally expected that the same foot is used for both, and by design they are not supposed to be used together."
My father used his right foot for the accelerator and his left for the brake. It was fairly common in his day.
"My father used his right foot for the accelerator and his left for the brake. It was fairly common in his day."
Might be common, but it's a really bad idea. The worst example of this is people who actually keep their left foot on the brake pedal while driving. This risks dragging the brakes while you're driving, which will overheat the brake fluid, aside from wasting energy, wearing brake linings, overheating and probably warping rotors, and keeping the brake lights on so drivers behind you can't figure out what you're doing (added as the last bad effect, because it is the least destructive).
I think that treating the simultaneous application of brakes and throttle as an error condition is a great idea, myself, and it is consistent with the way cruise control works as well. Plus, it would cure drivers of that bad habit in a hurry!
I use my left foot for the brake when I drive an automatic shift car. Nothing wrong with it at all, and you can gain a fraction of a second in braking response time. And no, I don't ride the brake normally, I am not an idiot.
Before power brakes and automatic transmissions, using the right foot for the gas pedal (a "light" touch) and the left foot for the clutch and the brake (a "heavy" touch) made good sense. Obviously riding the clutch or the brake caused undesirable wear and was avoided.
"Before power brakes and automatic transmissions, using the right foot for the gas pedal (a 'light' touch) and the left foot for the clutch and the brake (a 'heavy' touch) made good sense."
This is getting rather tangential, but it seems to me that when driving a stick, you have to be able to press both the clutch and the brake together, although not exactly simultaneously. Therefore, it's practically impossible to use the same foot for both.
Slow down for a red light. You lift your foot off the accelerator. Perhaps you downshift for some engine braking. You start applying the brakes. As the car slows down, clutch still engaged, you will have to push in the clutch to keep the engine from stalling, as the car comes to a stop. Meanwhile, your right foot has been braking all along.
Or, slowing down for a tight turn. Foot off the accelerator, you brake gently with your right foot, then push in the clutch to downshift, release the clutch while still braking gently, and then accelerate out of the curve. Still pretty hard to do with just one foot.
Honestly, I see no good reason for pushing the accelerator and the brake at the same time, unless you're a teenager looking to spin the wheels when the light turns green, and still too clueless to understand the damage you're doing to dad's car.
Manual transmission on a steep hill. You need to transition from a stop to moving. Speed from brake to accel is too slow to keep from stalling. What do you do? Hit the brake and accelerator at the same time, then transition from brake to accelerator. Why not use the parking brake? Some cars have foot-actuated parking brakes, and you already have the problem of not having enough feet...
"Manual transmission on a steep hill. You need to transition from a stop to moving. Speed from brake to accel is too slow to keep from stalling. What do you do?"
I agree with the "not enough feet" scenario. Although I'm not usually worried about stalling, as much as I'm worried about frying the clutch!
Yes, I too apply the hand brake while moving the right foot from brake to throttle. A foot-actuated (and foot-deactuated) parking brake makes this technique impossible, in a stick shift. So, you either learn to drive more skillfully, or you buy an automatic.
It's quite difficult, in most stick shift cars, to apply brakes and throttle at the same time (aside from a hand brake), although if you have a reasonably wide foot and the pedals are positioned just right, it can be done. Still, for a regular stick shift car or for automatics, having the brake pedal override any throttle command seems easy and fool proof enough. The hand brake is mechanical, cable-operated, and best kept out of the throttle safety logic, IMO. For one thing, in my experience anyway, hand brakes are hardly adequate as any sort of safety device while the car is moving. They aren't close to effective enough to overpower an engine at full throttle.
As the resident test & measurement editor, I must ask: how do we know what caused the flipped bit? Was it caused by a glitch resulting from noise? Was it purely software initiated? Is the condition repeatable enough to determine the root cause?
MeasurementBlues, I thought the article implied the memory corruption "may" have caused the bit flip. Given all that I read in the article, it makes me quite concerned about self-driving cars. I hope there will be standards employed similar to the FDA's for life-critical devices. With a little (very little) experience in fail-safe coding and hardware design, it seems obvious to me that cabling could fail in many ways. Cable signal design should have provided an easy means of detecting a single- or multiple-line cable failure, sort of like the old active-low signals with pull-ups for backplanes. It is important to keep in mind the technical challenges involved in coding, but I wonder if there should be an electronic override feature that provides either a fresh reload of code (if it is possible to do safely; I don't know what the reload/power-up looks like) or a fully parallel "simple" processor to allow for "direct" user control with minimal bells and whistles. Just thinking that, if nothing else, being able to TAKE back control by as manual a means as possible would be at least reassuring.
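The active-low-with-pull-ups idea mentioned above is worth spelling out (the 8-bit frame below is invented for illustration): signals are driven low when asserted, so a disconnected cable floats high through the pull-ups and the receiver sees all ones, a pattern reserved as "invalid".

```python
# Sketch: on an active-low bus with pull-up resistors, an open cable
# reads as all ones. Reserving that code word as invalid turns a silent
# wiring failure into a detectable one.
BUS_WIDTH = 8
DISCONNECTED = (1 << BUS_WIDTH) - 1   # 0xFF: every line pulled high

def read_ok(raw):
    """Reject the all-ones pattern that an open cable produces."""
    return raw != DISCONNECTED

print(read_ok(0b10110010))   # normal frame: True
print(read_ok(0xFF))         # open cable: False
```

The same principle shows up in analog sensor loops that keep the valid range away from the rails, so a short to ground or supply reads as an out-of-range value rather than a plausible one.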
@RoboticsDeveloper, good to hear from you again. "Given all that I read in the article it makes me quite concerned about self driving cars."
The lawyers must be salivating at the thought of self-driving cars. Accidents will occur even then, and there will be no driver error as the cause. The blame will fall to the auto makers, the designers of the roads, the municipalities if those roads are not properly maintained, and so on.
Back @MeasurementBlues: I can only imagine the lawsuits, the costs, the huge money (for the lawyers!) given that it will be companies being targeted for the fault. What about the car service people? If they did not "properly" check out the operation of the vehicle's electronics at the last service, then they could be liable as well. Just think what that would cost everyone if all the service folks needed insurance to protect themselves from lawsuits, plus the added cost of new tests and equipment.
Robotics Developer, autonomous vehicles can actually make the problem less difficult, not in overall complexity, but in oversight of the situation, situational awareness. In the Toyota scenario that we're discussing there is no way to independently judge intent or consequences.
Les Slater, I am not sure how an autonomously driven car makes the problem less difficult. Given all the variables with roads (conditions, the car's state of operation, other vehicles, etc.) there are just so many complications to account for that I would be very surprised if they covered all the bases. Given the huge task and the possible failures of systems/subsystems, what is the fallback for the "passengers"? How/when would they be able, or know, to take over? It sort of boggles the mind, all the possibilities. I have driven robots both with drive assist and with full manual; drive assist really helps, but if there is a sensor fault it does not take long to get into trouble even at 15 ft/sec. I can't imagine what would happen at highway speeds. I am sure that the technical challenges can be solved, but I would really want to see a lot more testing, standards, and safety features before I would "get behind the wheel" of an autonomous car.
But back to the Toyota case: I was troubled by the lack of driver control over the electronics, given the systems set up as they were. I would not want any system to override a desire to stop. There should have been a means to prevent runaway situations, if nothing else than to stop motion if there is a conflict between gas and brake... just a thought. Intent is hard to know for sure, I agree, but if the black box had been able to robustly determine whether the gas was pressed and/or the brake, then maybe intent would have been easier to determine.
The 800-page report, in redacted form, was filed in U.S. District Court in Santa Ana, CA, in St. John v. Toyota on April 12, 2013. I don't have it; I am contacting the court to see if it is available. Meanwhile, the unredacted version exists only in the code room and in a few lawyers' hands, according to those involved in the investigation.
Was the rate of acceleration specified in any of the reports the two agencies provided? Or, in the event of an accident, was it determined approximately how fast the car had accelerated?
Rich Pell, that was what I read into that statement. What worries me more is that it was possible to record false data in the first place. That seems to be a failure in the design that should have been caught early in the design review process. All that said, I wonder how many drivers have been wrongly accused of being the cause when the black box data is used and treated like it is an impartial data collection means? Makes me wonder, for example: jury members for this trial NEEDED to have some technical understanding / discernment, otherwise how could they come to the right conclusion? If my dad had been on the jury, most if not all of this would have been quite over his head. This aspect of the trial I find very interesting, and I wonder what the jury selection process entailed.
"I wonder how many drivers have been wrongly accused of being the cause when the Blackbox data is used and treated like it is an impartial data collection means???"
This is why having some idea of the probability of such errors is so important. Here it seems that the jury concluded that not only did a throttle fail-safe error occur but that also the car's EDR failed to record events properly. What is the likelihood of this scenario compared to that of a human error-caused unintended acceleration - an event that is known to be not uncommon, especially among older drivers?
Rich Pell, I agree fully with your assessment! The likelihood of a car vs. driver mistake is widely different. On both ends of the spectrum: very old and very young drivers can make mistakes. I would like to see more cars with collision avoidance electronics as a means of preventing some crashes. I know that these cost money, but I wonder if insurance company discounts would help offset the additional cost for these features?
I have gone through that court testimony document at embeddedgurus.com. It is not only interesting, but can be considered a lesson if you are in the business of designing safety-critical hardware/firmware.
The following is an excerpt from that PDF. It is an answer given by Mr. Barr when asked about how they had access to the Toyota source code.
"That experts see source code is not unusual, but the protections around this source code are certainly unusual in my experience. The source code review involves looking at electronic documents on computers. There is basically a room the size of a small hotel room that is disconnected from the Internet, no cell phones allowed inside or would work inside. In that room there is about five computers and some cubicles. In there, it is possible to believe view on the computer screen Toyota's source code. We couldn't take any paper in, take any paper out, couldn't wear belts, watches. There was a guard. It was worse than airport security was on the way here. Each time in and out, even to go to the bathroom."
Perhaps I've watched too many TV legal dramas. When expert witnesses start heaping up evidence on the plaintiff's side, sometimes it seems overdone.
In this case, the fact that a zillion potential issues with the throttle algorithm were uncovered, even though none of them was actually determined to be the cause, nor was their probability of occurrence mentioned, and further that it was shown that the black box may also be lying at the same time, seems a bit like "stacking the deck."
I suppose the intent was to absolve the driver from any possible responsibility, because she evidently hadn't applied the brakes? Like I said, probably too many TV dramas.
Aside from that, it certainly makes sense to have the brake pedal take precedence over any throttle control signal. I can't imagine a proper autonomous vehicle NOT implementing that same logic. Any braking command automatically overrides any acceleration command. Simply because, in the majority of major system failure scenarios, cars are better off stopped (hopefully on the side of the road). It's the most reasonable fail-safe mode.
Hi, Bert. I appreciate a level of skepticism... but let's not get too cynical before we know all the facts.
Actually, I find the fact that the experts' group was able to demonstrate at least one way for the software to cause unintended acceleration is a "breakthrough," at a time when the Toyota case -- up until last week -- was viewed by many as an issue of floor mat, sticky pedal or a driver's error.
Sorry to keep putting you on the spot, but this is important stuff.
I'm reading the court transcript, and it's clear that during the trial, Michael Barr had a presentation prepared with visual aids to walk the (most non-technical) jury through the findings.
In the U.S., the court system is open, correct? In other words, isn't everything, including the testimony, public and available, unless the courtroom is cleared?
I don't mean to give you (Junko) homework, but I'm sure every reader & commenter here would love to see the presentation and the same things the jury saw. We love to analyze, understand, learn. Is there any reason to expect we won't be able to see this?
The court transcripts are publicly available. I will post the URL and parse out relevant parts in a separate story. Unfortunately, the slides Mr. Barr presented during the trial are not publicly available.
As a lifelong gear head, I know that worn or broken parts can kill you, but I don't believe claims about unintended acceleration, given functional mechanical pedals, linkages, etc.
I developed a verification and testing process for a firm that developed embedded engine controllers and this all sounds familiar. I'd been dubious about the Toyota failures, but I didn't realize that this car was drive by wire. Buggy software as the root cause of the failure mode is therefore completely plausible, despite no finding of mechanical or electronic failures.
If Barr's report is accurate, the software design, programming, and testing was ignorant, sloppy, and inadequate. The real shame is that this is completely unnecessary – we've known how to achieve very high reliability software systems for a long time without breaking the bank. Model-based testing is now a big part of that.
I'm not sure who's responsible for the hype and inflammatory language ("a single bit flip could...," task death, dead task, dead app), but I guess that's what you have to do to make software failures tangible to a jury. It is interesting no smoking gun is reported (recorded input/state with incorrect output that directly caused the failure - i.e., it is not correct to say that a single bit flip caused the failure.) In a tort case, circumstantial evidence can be sufficient, so it seems that evidence of poor software development alone was enough to convince the jury that it probably caused the failure.
This may be the first time that indicators of bad code (not actual results) were sufficient to get a judgment. If so, I hope this is a wake-up call for people who manage this kind of system development and its risks: software hygiene isn't a fool's errand.
As a complete departure from the current approach to automotive safety system improvement by computerisation, I would like to pose the following question:
Why are regular automobiles designed and built with the capability to move at speeds that all agree can cause lethal harm? If the engines were all "governed down," there may well be no need for most of the layers of safety system architecture in the first place!
The presence of speed limit laws in all nations is admission that we all know what the dominant risk factor is. So why not fix it at the source?
@Mervynrs, an interesting argument -- sort of coming out of left field.
And yet, somehow I don't agree that "speed" is the key reason for the safety issues of cars. The growing list of bells and whistles now added to cars seems to be the culprit in my mind, although some of those new features are being developed for safety reasons.
There was 150 feet of skid marks from the plaintiff's tires. This was a MAJOR part of the trial, emphasized by Jere Beasley, founder of the plaintiffs' law firm, in a YouTube video accusing Toyota of a cover-up.
I commend EE Times for following the sudden unintended acceleration issue. Junko Yoshida summed things up quite well in her reply to one of the comments:
"...the fact that the experts' group was able to demonstrate at least one way for the software to cause unintended acceleration is a "breakthrough," at a time when the Toyota case -- up until last week -- was viewed by many as an issue of floor mat, sticky pedal or a driver's error."
The Oklahoma case is now being referred to as a "landmark," underscored by Toyota's shift to "settlement mode" after the verdict was in. Not only for the Oklahoma case, but for all of the remaining sudden unintended acceleration cases, and now there are reports that Toyota is also interested in "settling" (for about a billion bucks) the two-year-old federal criminal investigation as to how complaints of sudden unintended acceleration were handled (along with a few other niceties such as possible mail fraud, wire fraud, lying to Congress, and misleading stockholders).
Actions speak louder than words, and I won't belabor the notion of anyone being allowed to buy their way out of a criminal investigation.
I was always suspicious of "NASA's" report on the Toyota acceleration study and the assumption that a NASA evaluation is irrefutable. Not all NASA scientists and engineers are of the same caliber. I doubt that the programmers who worked on the deep space probes, where code has to be perfect, were the ones who reviewed the Toyota code. I also wonder if the black boxes now placed in automobiles have independent sensors for operation and black box logging. (I suspect they were looking at the same incorrect sensor input, with the same resulting conclusions.) We routinely reboot our PCs, printers, cell phones, game boxes, and cable/satellite receiver boxes with little consequence. However, when software controls peripherals that can affect lives, extra care has to be taken, and the code reviewed by programmers not involved in writing it.
This situation is nothing new. Look up "10 historical software bugs with extreme consequences" and "A collection of well-known software failures". Not admitting that a serious mistake was made is the real tragedy.
Right on, pbenjamins. NHTSA broadcasted the big lie that NASA had ruled out any electronic issues, knowing full well that NASA had done no such thing, that the space agency never said it had, and that NASA was hamstrung from the get go, complete with time limitations and incorrect information from Toyota. Embedded systems expert Michael Barr has set the record straight, and driving the point home, NASA physicist Henning Leidecker is now warning of increased risk of unintended acceleration in '02-'06 Camrys due to "tin whiskers" growing in the pedal sensors.