In his dotage, my old man (a chemistry Ph.D) used to say, as he slowly stirred his martini with a crooked finger, “the more we know, the less we know.”
As systems become increasingly complex because we overcome old technological hurdles, they also become more unpredictable.
One recent example is the report that NHTSA and the NASA Engineering and Safety Center (NESC) published regarding unintended acceleration of Toyota automobiles. Michael Barr has an excellent report on it. In short, NASA said it couldn’t rule in, but also couldn’t rule out, software problems as a culprit in the unintended acceleration problem.
And Stanford, via its Facebook page, has described how engineers are addressing the "aeroelastic flutter" problem, a complicated, unpredictable phenomenon. (P.S. don't watch this video if you happen to be on a plane with WiFi).
The more complex the software (and hardware), the harder it is to model and find corner cases. We seem to be falling behind in assessing the known unknowns and we’re completely in the dark about how to approach unknown unknowns.
We race up the abstraction ladder to try to keep our arms around design complexity, but that creates other issues. I attended the annual meeting of college engineering departments recently in Phoenix and one questioner from industry stood before a panel of academics shaking his head. It’s great to turn out really smart kids who know theory and can deal with abstraction, but if they struggle with basic engineering concepts, companies need to train (or retrain, perhaps) them.
How are we going to get better at anticipating the unknown unknowns? Is it formal methods? Is it impossible?
Please forgive the following snide comment: "What? Are there known unknowns?"
OK, I'm over it! Seriously - my experience has been that understanding basic concepts and seeing those concepts in action, or better yet, working with them in a "real world" manner will probably be of greatest benefit. Theoretical exercises certainly can work to demonstrate the fundamental concepts, but the real world is full of nth-order effects and dependencies that often seem to contradict theory, because the theoretical model is incomplete or incorrect. Read National Semi guru Bob Pease's work if you want to know how often theory (specifically SPICE models and simulation) does not match real world fact.
Models are just that - models. It is kind of like going into war battles per the model(s) on the planning room table, and wondering why it didn't actually work out the way it was planned. Not only is the real world different than models, real people in real time/world situations don't give a damn about a model when reality is staring them in the face.
I should have mentioned that story that emerged this week as part of the BP oil-leak investigation. The well apparently has a device whose sole purpose is to clamp the pipe shut in case of an emergency. The explosion shifted the pipe slightly so that the two opposing parts of the clamp couldn't form a tight fit.
You'd think they would have factored in potential misalignment of the pipe in an explosion, but maybe not.
To KD's point, models are models. And I suppose if we could factor in all the potential outcomes, we'd never move forward for fear of catastrophe.
The moral of the story being that in the case of oil wells that can pump the sea full of oil, or nuclear reactors that can pump the atmosphere full of radioactivity, a backup system is not enough. You have to have backups of backups. And that's where we always seem to go wrong.
To paraphrase Bob Pease, the results can never be more accurate than the model used. An incomplete model can produce very pretty results that are very wrong, although they may appear to be correct over some small range. With the failed blowout preventer on the oil well, the device is supposed to be locked onto the well so that no shift is possible.
As for taking control of the big screen, would you want that capability to fall into the hands of those internet folks trying to sell us viagra? Just think about that "unintended consequence."
I just checked out the 'Aerostatic Flutter' link.
Cool stuff! Though the solution that was presented seemed too simplistic for all the aeronautical engineers to have not considered as yet. I would love to hear what NASA had to say about their models.
NASA can model, simulate, make, and launch the Messenger satellite to orbit Mercury after making it travel for more than 6 years and billions of miles. That disproves the uncertainty principle. Uncertainty of performance, uncertainty of quality, etc. is actually built in by the product designer or implementer by overlooking some aspects. That 'let-go' thing is the root cause of all those uncertainties. In the case of a mission like sending the satellite to Mercury, such a 'let-go' attitude is not tolerated and hence the project becomes successful.
Backups to backups may not be a bad idea as long as that redundant backup does not rely on something that can or will fail in a catastrophic failure. Oh, about blowout preventers, didn't their instruments indicate there was an issue?