In his dotage, my old man (a chemistry Ph.D.) used to say, as he slowly stirred his martini with a crooked finger, “The more we know, the less we know.”
As systems become increasingly complex because we overcome old technological hurdles, they also become more unpredictable.
One recent example is the report that NHTSA and the NASA Engineering and Safety Center (NESC) published regarding unintended acceleration in Toyota automobiles. Michael Barr has an excellent report on it. In short, NASA said it could neither rule software problems in nor rule them out as a culprit in the unintended acceleration.
And Stanford, via its Facebook page, has described how engineers are addressing the "aeroelastic flutter" problem, a complicated, unpredictable phenomenon. (P.S. don't watch this video if you happen to be on a plane with WiFi).
The more complex the software (and hardware), the harder it is to model and find corner cases. We seem to be falling behind in assessing the known unknowns and we’re completely in the dark about how to approach unknown unknowns.
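A rough sense of why corner cases get away from us: the number of combinations to test grows exponentially with the number of independent inputs. This back-of-envelope sketch (my own illustration, with assumed input counts, not figures from the post) shows how quickly exhaustive testing becomes infeasible:

```python
# Toy illustration: exhaustive test-case counts explode with complexity.
# A system with n independent boolean inputs has 2**n input combinations;
# add internal state and the space grows further still.

def exhaustive_cases(n_boolean_inputs):
    """Number of distinct input combinations for n boolean inputs."""
    return 2 ** n_boolean_inputs

for n in (10, 20, 40):
    print(f"{n} inputs -> {exhaustive_cases(n):,} combinations")
```

At 40 inputs you are already past a trillion combinations, which is why we fall back on models and sampling, and why the corner cases we never exercise remain unknown unknowns.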
We race up the abstraction ladder to try to keep our arms around design complexity, but that creates other issues. I recently attended the annual meeting of college engineering departments in Phoenix, where one questioner from industry stood before a panel of academics shaking his head. His point: it's great to turn out really smart kids who know theory and can deal with abstraction, but if they struggle with basic engineering concepts, companies have to train (or perhaps retrain) them.
How are we going to get better at anticipating the unknown unknowns? Is it formal methods? Is it impossible?
Uncertainty in design is inversely proportional to the time and money spent on the original design, as well as on the design of the backup systems. There is also a built-in factor that prevents uncertainty from ever actually reaching zero.
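The commenter's relationship can be sketched numerically. This is a hypothetical toy model of my own (the function name, constants, and form are assumptions, not anything from the comment): uncertainty falls off as investment in design and backups rises, but an irreducible floor keeps it from ever reaching zero.

```python
# Toy model of the commenter's claim: design uncertainty shrinks with
# time/money invested, but a built-in floor prevents it reaching zero.

def residual_uncertainty(investment, k=100.0, floor=0.01):
    """Hypothetical: uncertainty ~ floor + a term inversely
    proportional to investment (k sets how fast it decays)."""
    return floor + k / (k + investment)

for spend in (0, 10, 100, 1000, 10_000):
    print(f"investment={spend:>6} -> uncertainty={residual_uncertainty(spend):.4f}")
```

However large the investment, the result never drops below the floor, matching the observation that uncertainty can be reduced but not eliminated.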
In theory, the more mission-critical or life-critical a system is, the more time and money are put into redundancy and backup systems. Reality doesn't always match up with theory in this case, either.
In a similar light, the less I know about the origins of Kopi Luwak and chorizo, the better I enjoy my breakfast with my favorites: scrambled eggs and coffee. One has to enjoy the result of proven good sources of nourishment and flavor. By the same token, one has to understand at a high level the implications of complex organic chemistry, and know the allergy potential, the same way that basic modeling provides an estimate, not necessarily a detailed and inclusive analysis of significant nth-order effects. Just an observation.
Backups to backups may not be a bad idea, as long as that redundant backup does not rely on something that can or will fail catastrophically. Oh, and about blowout preventers: didn't their instruments indicate there was an issue?
NASA can model, simulate, build and launch the Messenger satellite to orbit Mercury after making it travel for more than six years and billions of miles. That disproves the uncertainty principle. Uncertainty of performance, uncertainty of quality and so on is actually built in by the product designer or implementer by overlooking some aspects. That 'let-go' attitude is the root cause of all those uncertainties. In the case of a mission like sending a satellite to Mercury, such a 'let-go' attitude is not tolerated, and hence the project succeeds.
I just checked out the 'Aeroelastic Flutter' link.
Cool stuff! Though the solution that was presented seemed too simplistic for aeronautical engineers not to have considered already. I would love to hear what NASA had to say about their models.
To paraphrase Bob Pease, the results can never be more accurate than the model used. An incomplete model can produce very pretty results that are very wrong, although they may appear to be correct over some small range. With the failed blowout preventer on the oil well, the device is supposed to be locked onto the well so that no shift is possible.
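Pease's point, that an incomplete model can look correct over a small range and be badly wrong outside it, is easy to demonstrate. A hedged sketch (my own example, not Pease's): approximating sin(x) with the linear model x, which is the classic small-angle simplification.

```python
import math

# Incomplete model: sin(x) ~ x (valid only near zero).
def model(x):
    return x

def error(x):
    """Absolute error of the linear model against the true function."""
    return abs(model(x) - math.sin(x))

print(f"error at x=0.1: {error(0.1):.6f}")  # tiny inside the valid range
print(f"error at x=2.0: {error(2.0):.6f}")  # large once we leave it
```

Inside the narrow range the model's "very pretty results" agree to four decimal places; outside it, the error exceeds 1.0, even though nothing in the model itself warns you that you've left its domain.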
As for taking control of the big screen, would you want that capability to fall into the hands of those internet folks trying to sell us viagra? Just think about that "unintended consequence."
The moral of the story being that in the case of oil wells that can pump the sea full of oil, or nuclear reactors that can pump the atmosphere full of radioactivity, a backup system is not enough. You have to have backups of backups. And that's where we always seem to go wrong.
I should have mentioned that story that emerged this week as part of the BP oil-leak investigation. The well apparently has a device whose sole purpose is to clamp the pipe shut in case of an emergency. The explosion shifted the pipe slightly so that the two opposing parts of the clamp couldn't form a tight fit.
You'd think they would have factored in potential misalignment of the pipe in an explosion, but maybe not.
To KD's point, models are models. And I suppose if we could factor in all the potential outcomes, we'd never move forward for fear of catastrophe.