1. Complex software always has bugs, including latent ones that may rarely, if ever, show up.
2. No matter how imaginative the team is, they will never be able to think of all those bugs. Some bugs and their consequences will simply never occur to the team members.
3. Safety-critical systems should follow standards, but even when they do, random events can still activate latent software bugs and defeat the very fail-safe systems designed to protect against those latent bugs.
It is possible to reduce errors to effectively zero, but it is very hard. A complex system will always be less reliable than a simple one, unless that complexity is focused on reliability (for example, overlapped and crosschecked operation of independent systems). The system (hardware and software) has to be independently verified, since developers have blind spots around their own work. The real issue is that it can't be rushed. Making reliability trump schedule would avoid many problems of this type, but especially recently that is a hard case for engineers to make to management.
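The "overlapped and crosschecked" idea can be sketched in a few lines of C. This is a minimal, hypothetical example, not any real ECU's code: two independently wired throttle-position sensors are compared, and a disagreement beyond a tolerance is reported as a fault so the system can drop into a limp-home mode. The names and the threshold are illustrative assumptions.

```c
#include <stdlib.h>   /* abs() */

/* Hypothetical cross-check of two independent throttle sensors.
 * DISAGREE_LIMIT is an illustrative tolerance in ADC counts. */
#define DISAGREE_LIMIT 5

typedef enum { CHECK_OK, CHECK_FAULT } check_result;

/* Returns CHECK_FAULT when the redundant readings diverge,
 * signalling that neither reading should be trusted on its own. */
check_result crosscheck(int sensor_a, int sensor_b)
{
    return (abs(sensor_a - sensor_b) <= DISAGREE_LIMIT)
               ? CHECK_OK
               : CHECK_FAULT;
}
```

The point of the comparison is that a single stuck or drifting sensor cannot silently command the throttle; the complexity added here serves reliability rather than features.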
In fact, the right question must be: "Do we really need to hope for anything?"
Because even if we could write perfect firmware, with zero bugs and zero failures, resulting in a perfect car, it ceases to be perfect the moment some other driver loses control of his vehicle and hits us. If so, the best answer, in my view, is not to hope for perfection. The only thing we can do is try to write better firmware that results in better systems than before.
All ECU-controlled cars have this kind of automated control, with manual control provided for the driver, but we cannot say they are totally manual. Wherever there is automation, these kinds of malfunctions are possible. We should be ready for these kinds of accidents, as the time is now coming for driverless autonomous vehicles. God knows what our dependency on machines will be after that.
There is no perfect world; all we can do is make things better and better over time. Complexity is a problem, yes, but modular, reuse-based design, agile software development, and continuous improvement can reduce the risk of bugs dramatically.
This article (and all discussions on this subject) seems to ignore the number-one safety system in place in all of these fly-by-wire cars, and that is the DRIVER.
For decades, drivers had to deal with the possibility of a stuck throttle. Were cables worry free?
In all of these supposed run-away cars, there was a transmission which could be mechanically shifted to neutral, and an independent hydraulic braking system which could stop the car, even if the engine was buzzing away at redline.
That doesn't mean the manufacturers shouldn't do their best to build a car that is bug-free, but the ultimate responsibility lies with the driver. If the car does something unexpected, shift it to neutral and stop, period. If a stuck throttle is something you can't cope with, you probably shouldn't be behind the wheel of a 4,000 lb projectile.
I think that with the advent of electronically controlled transmissions and brakes, and a common bus connecting them to the ECU (throttle), this may not be feasible in the event of cascading failures.
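One common defense against a failure cascading across a shared bus is for each node to time out peers that stop sending heartbeats, then fall back to an independent local path. The sketch below is a hypothetical illustration, not any real automotive network stack; the structure names and the 100 ms timeout are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical peer-monitoring on a shared vehicle bus: if the
 * throttle ECU stops broadcasting heartbeats, the brake node
 * detects it and switches to its independent backup behavior
 * instead of waiting on a dead controller. */
#define HEARTBEAT_TIMEOUT_MS 100u

typedef struct {
    uint32_t last_heartbeat_ms; /* timestamp of last message seen */
} peer_t;

/* Returns true when the peer should be treated as failed.
 * Unsigned subtraction keeps this correct across timer wraparound. */
bool peer_timed_out(const peer_t *p, uint32_t now_ms)
{
    return (now_ms - p->last_heartbeat_ms) > HEARTBEAT_TIMEOUT_MS;
}
```

The design point is containment: a fault on one node degrades that node's function, rather than silently stalling every controller that shares the bus.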
I have personally experienced two ABS failures and one failure of mechanical brakes. I walked away from all of them, mostly by shifting the transmission into low combined with using the emergency brake. Had this mechanical backup not functioned, I would likely be in much worse shape medically than I am now.
Aircraft systems typically use three independent channels, be they mechanical or electro-mechanical -- any one of which will maintain control of the aircraft in the event of the loss of the other two.