You ask for notable cases. The ones I can think of are the Toyota Camry throttle control module and the GM ignition switch swap-out (without changing the part number - a BIG no-no in FAA-controlled systems). I can't think of any more just now, but that covers 30M+ vehicles on the road - more than the passengers carried on commercial airplanes in a couple of months. And note that the most recent fatal airline incidents have been due to pilot error/stupidity - the most flagrant being the Buffalo crash, where the pilot ignored the stick-shaker and stalled the airplane. If it had been him alone it would have gone into the Darwin Awards; instead he killed an airplane full of paying passengers.
It's estimated that 95% of modern electronics failures stem from bad connections (ignition switches etc.). There was even a NASA report on an investigated case of Toyota unintended acceleration caused by lead-free solder whiskers shorting out the accelerator pot (the link is now dead).
It seems like the old one step forward (more reliable ICs) and two steps back (lead-free solder whiskers).
I've witnessed a great deal of garbage code (as well as iffy hardware) hit the streets based on the attitude, "well, it seems to work, let's try it out in the field." I find I spend the majority of my time on a development project writing test procedures to fully debug performance under all the conditions I can think of. Once these test procedures are well established, they become quite efficient. Unfortunately, many consider a battery of tests a waste of time and money, preferring to let their customers do the testing.
One technique that works very well for me is using a spreadsheet to test software. First I simulate every possible value and frequency of input (in and out of range - sine wave, square wave, sawtooth, etc.) and then convert these values to the format of the circuit inputs. The circuit outputs are then converted back into spreadsheet inputs and the results compared graphically. It's like testing an amplifier with a function generator and a scope. It never ceases to amaze me that the vast majority of the software I've tested this way fails dramatically.
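The same generate-convert-compare loop can be sketched in code instead of a spreadsheet. This is a minimal illustration, not the author's actual workflow: the waveform kinds, the 10-bit unipolar ADC format, and the tolerance check are all assumptions standing in for whatever the real circuit uses.

```python
import math

def waveform(kind, amplitude, freq_hz, sample_rate_hz, n_samples):
    """Generate a test vector: sine, square, or sawtooth."""
    out = []
    for i in range(n_samples):
        phase = (freq_hz * i / sample_rate_hz) % 1.0
        if kind == "sine":
            v = amplitude * math.sin(2 * math.pi * phase)
        elif kind == "square":
            v = amplitude if phase < 0.5 else -amplitude
        elif kind == "sawtooth":
            v = amplitude * (2 * phase - 1)
        else:
            raise ValueError(kind)
        out.append(v)
    return out

def to_adc_counts(volts, full_scale=5.0, bits=10):
    """Convert volts to the circuit's input format (here a hypothetical
    unipolar 10-bit ADC), deliberately accepting out-of-range inputs so
    the clipping behavior gets exercised too."""
    counts = round((volts + full_scale / 2) / full_scale * (2**bits - 1))
    return max(0, min(2**bits - 1, counts))  # model the converter's hard limits

def compare(expected, actual, tolerance):
    """Pointwise pass/fail - like overlaying the two traces on a scope."""
    return [abs(e - a) <= tolerance for e, a in zip(expected, actual)]
```

A test run would sweep amplitudes (including out-of-range ones) and frequencies through `waveform`, push the converted counts through the code under test, and feed the outputs back into `compare` - the software equivalent of the function-generator-and-scope setup described above.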
Another industry practice is to pitch the project over the wall to the production group and go on to a new interesting project. I prefer to follow the project throughout its life. There is no way a designer can effectively transfer his knowledge quickly to production and maintenance. Pitching projects over the wall is just asking for unneeded trouble.
Flash memory, with its 10-to-20-year life expectancy (depending on temperature), is 100% effective planned obsolescence. There is no possible way that today's vehicles will still be running 50 years from now. How are you going to replace the Flash, even if it is housed in one of today's standard packages, let alone inside a custom IC? And that part is trivial compared to figuring out how to program the new Flash.
Just think, we have made it impossible for anyone to admire our work 50 years from now.
@elizabethsimon: Also, newer Flash with smaller gate size has less charge storage so the life is less
There's another big factor: most high-density flash these days is multi-level cell (MLC). I think the majority of it now is "3-level" (really misnamed, since it actually means 3 bits/cell, ergo 8 levels in the physical world). This cuts the noise margin dramatically, and that also applies to the effects of charge dissipation due to leakage. This applies primarily to "raw" flash; "managed" flash (like that used in SD cards etc.) is preferred for automotive applications because it 1) greatly extends the cycle life - it detects failed cells, rotates usage to distribute wear evenly among all the cells, and, just like HDDs do, takes failed cells/blocks out of service permanently - and 2) helps with issues like thermal/leakage degradation using the same error detection and remapping algorithms as #1.
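The wear-leveling and bad-block retirement described above can be sketched as a toy model. This is purely illustrative - the class, its least-worn-block policy, and the fixed endurance limit are assumptions, not any real controller's algorithm:

```python
class ManagedFlashSketch:
    """Toy model of a managed-flash controller: spread erases evenly
    across blocks and permanently retire blocks that wear out."""

    def __init__(self, n_blocks, max_erases):
        self.erase_counts = [0] * n_blocks
        self.failed = set()          # blocks taken out of service
        self.max_erases = max_erases # illustrative endurance limit

    def pick_block(self):
        # Wear leveling: always write to the least-erased healthy block.
        healthy = [b for b in range(len(self.erase_counts)) if b not in self.failed]
        if not healthy:
            raise RuntimeError("all blocks worn out")
        return min(healthy, key=lambda b: self.erase_counts[b])

    def erase(self, block):
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= self.max_erases:
            # Bad-block management: retire the block permanently,
            # much as an HDD remaps a failed sector.
            self.failed.add(block)
```

Driving every erase through `pick_block` keeps the counts within one of each other, which is the point of the scheme: no single block soaks up all the write traffic.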
My previous employer (a large Japanese company who is a major OEM subsystem supplier to Toyota among others) was very much opposed to any use of MLC flash during my tenure there, insisting that even SLC required extreme measures to prevent problems.
IIRC, the old rule of thumb was that leakage increases by an order of magnitude for each 10 degrees C of temperature rise. With an underhood temperature rise of 30 C or more, that's a BIG increase in leakage!
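Applying that rule of thumb numerically shows how dramatic the effect is. Note the inputs here are the thread's own figures (a nominal 20-year retention and 10x leakage per 10 C), not datasheet specifications:

```python
def retention_estimate(baseline_years, baseline_temp_c, actual_temp_c):
    """Derate data retention using the rule of thumb that leakage grows
    ~10x per 10 degrees C; retention shrinks by the same factor.
    Figures are the thread's rule of thumb, not a datasheet spec."""
    leakage_multiplier = 10 ** ((actual_temp_c - baseline_temp_c) / 10.0)
    return baseline_years / leakage_multiplier

# A 30 C rise cuts retention by 1000x: a nominal 20 years becomes
# 0.02 years - on the order of a week.
```

Under this (admittedly crude) model, the 20-year retention figure quoted elsewhere in the thread simply doesn't survive sustained underhood temperatures.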
@tb100 I thought the Flash degradation was caused by lots of erase/write cycles, as often happens on memory cards in tablets and smartphones. Data retention time with no memory writes should be 20 years, depending on which memory chip you use.
For Flash that is written often, the lifetime is generally limited by the degradation due to erase/write cycles. This is due to excess charge storage in the gate.
For Flash that is written infrequently, the lifetime is limited by the rate at which the charge leaks from the gate - which is where the 20 years comes from. Unfortunately, the rate of charge migration increases significantly with temperature, so retention is likely to be worse under the hood. Also, newer Flash with smaller gate size stores less charge, so the life is shorter.
Ignition switch failures are deja vu all over again - late-'40s and early-'50s GM vehicles had an ignition switch prone to failure from keyring overload (i.e., torque due to vehicle movement). I saw a number of these, and there was even an article in Popular Mechanics' "Gus Wilson's Model Garage" series where a heavy woman and a lighter man traded places between the driver's seat and the passenger seat - when the light man drove, the car quit; when the heavy woman drove, the car worked fine.
In the early '70s I encountered several GM vehicles with this switch problem - back then the fix was to install an aftermarket ignition switch and restore domestic bliss.
These days that's not an alternative - with key-coded switches and steering-wheel locks, aftermarket switches are about as useful as a goat-powered methane digester (which I have also seen - you need a lot of goats).
As vehicles come to depend on more electronic systems, the risk of a minor failure causing a fatal accident grows. A toothless federal agency - NHTSA - can't manage these problems the way an agency like the FAA and its enforcement arms can. It's time for automotive software to be certified by an independent agency with real teeth: prove it works or don't put it on the road.
People will howl about inhibition of invention or the creative process - I guess I would rather wait a couple more years for the latest and greatest car than be the victim of a creative process gone wrong. The auto companies can figure this out; they don't need to turn to lobbyists - just listen to their engineers, and spend the money on the engineering/QA department instead of putting more lobbyists in Washington.
re: "My bet is that in spite of the hyped up recalls lately, the result would be that cars have far, far fewer of these problems than they used to"
This is so true in my experience. Since the advent of the ECM, I haven't had a car experience vapor lock on a hot day or at high altitude. I haven't had a car fail to start except for the few times I've left my lights on. I haven't had to adjust the mixture on a carburetor. I could go on.
It's not just the electronics. Metallurgy and design have improved greatly as well. 75,000 miles used to be old for a car; now I'd feel cheated if the car weren't running strong at twice that mileage.
The downside is that, rare though failures may be, there are more expensive items that can go wrong these days.
Lessons learned? Although consumers have always had to keep an eye out for poorly designed cars (and companies with bad reputations), having more electronics in the car means consumers have to stay more vigilant.
I don't think this describes what's going on. I think the truth is that the more systems you design into a car - or into anything else - the more fault modes you potentially create. The net effect on safety and quality can still be very positive.
All of those air bag recalls are the perfect example. Air bags started showing up as optional equipment in the 1970s and 1980s, and they became mandatory in the US in 1998. So yes, over time, experience will no doubt uncover defects or glitches, and recalls will be issued, but does that mean that today's cars are less well built than those of the 1970s? Far from it.
The other thing is, I get the impression that at GM, Mary Barra is "cleaning up" issues that she thinks were left hanging far too long. Some of the recalls would undoubtedly not have been recalls in the past, but rather treated as any other repair. So we're seeing a huge number of recalls because these are items that went unaddressed for a decade or more and that Mary Barra figures should be recalls. I can't blame her. Hopefully for GM, this initial transient will die down and the company will regain some trust with consumers. As it rightfully should.
Another point: I think it would be really instructive to compare, say, brake failures in current cars against brake failures of pre-electronics cars, say cars from the 1970s or previous. Or steering problems. Or engine start problems. Or stalling problems. My bet is that in spite of the hyped up recalls lately, the result would be that cars have far, far fewer of these problems than they used to.
I thought the Flash degradation was caused by lots of erase/write cycles, as often happens on memory cards in tablets and smartphones. Data retention time with no memory writes should be 20 years, depending on which memory chip you use.
Of course, that doesn't help if you keep the car around for 20 years.