Uncertainty in design is inversely proportional to the time and money spent on the original design as well as on the design of the backup systems. There is also a built-in floor that prevents uncertainty from ever actually reaching zero.
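That claim could be sketched as a formula (my formalization, not the commenter's; the constant and the floor are assumed, not given):

```latex
% Uncertainty U as a function of total design spend s (sketch only):
%   s     = time + money spent on the original design and the backups
%   k     = a problem-specific constant
%   U_min = the irreducible floor that keeps uncertainty above zero
U(s) = U_{\min} + \frac{k}{s},
\qquad
\lim_{s \to \infty} U(s) = U_{\min} > 0
```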
In theory, the more mission-critical or life-critical a system is, the more time and money go into its redundancy and backup systems. In this case too, though, reality doesn't always match theory.
In a similar light, the less I know about the origins of Kopi Luwak and chorizo, the better I enjoy my breakfast of my favorites: scrambled eggs and coffee. One has to enjoy the result of proven good sources of nourishment and flavor. By the same token, one has to understand at a high level the implications of the complex organic chemistry, and know the allergy potential, the same way that basic modeling provides an estimate rather than a detailed, all-inclusive analysis of every significant nth-order effect. Just an observation.
Backups to backups may not be a bad idea, as long as that redundant backup does not rely on something that can or will fail catastrophically. Oh, and about blowout preventers: didn't their instruments indicate there was an issue?
NASA can model, simulate, build, and launch the MESSENGER satellite to orbit Mercury after making it travel for more than six years and billions of miles. That disproves the uncertainty principle. Uncertainty of performance, uncertainty of quality, and so on are actually built in by the product designer or implementer overlooking some aspects. That "let it go" attitude is the root cause of all those uncertainties. In the case of a mission like sending a satellite to Mercury, such a "let it go" attitude is not tolerated, and hence the project succeeds.
I just checked out the 'Aerostatic Flutter' link.
Cool stuff! Though the solution presented seemed too simplistic for all the aeronautical engineers not to have considered it already. I would love to hear what NASA had to say about their models.
To paraphrase Bob Pease, the results can never be more accurate than the model used. An incomplete model can produce very pretty results that are very wrong, although they may appear to be correct over some small range. With the failed blowout preventer on the oil well, the device is supposed to be locked onto the well so that no shift is possible.
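A toy sketch of that point (my own example, not Pease's): fit a straight line to a nonlinear system over a narrow operating range, and the model looks excellent there while being wildly wrong outside it.

```python
# Hypothetical illustration: a linear model fitted to a quadratic system
# over a small range looks accurate in-range and fails badly out of range.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def true_system(x):
    return x * x  # the real behavior is nonlinear

# Sample only a narrow range around x = 1, as if that were all we tested.
xs = [0.9 + 0.02 * i for i in range(11)]
a, b = linear_fit(xs, [true_system(x) for x in xs])

inside = abs((a * 1.0 + b) - true_system(1.0))    # error inside the range
outside = abs((a * 10.0 + b) - true_system(10.0))  # error far outside it
print(f"error at x=1: {inside:.4f}, error at x=10: {outside:.1f}")
```

In-range the error is a fraction of a percent; at x = 10 the linear model misses the true value by a huge margin, even though nothing in the fitted range hinted at a problem.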
As for taking control of the big screen, would you want that capability to fall into the hands of those internet folks trying to sell us viagra? Just think about that "unintended consequence."
The moral of the story being that in the case of oil wells that can pump the sea full of oil, or nuclear reactors that can pump the atmosphere full of radioactivity, a backup system is not enough. You have to have backups of backups. And that's where we always seem to go wrong.
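Back-of-the-envelope arithmetic (my numbers, purely illustrative) shows why the shared failure mode, not the count of backups, is what goes wrong: stacking independent layers drives the failure probability down geometrically, but a common-mode fault that defeats every layer at once puts a hard floor under it.

```python
# Sketch with assumed numbers: failure probability of a protection system
# with k backup layers, with and without a common-mode fault that takes
# out every layer at once (e.g. shared power, or a pipe shifted by the
# same explosion the layers are meant to contain).

def p_fail_independent(p_layer, k):
    """All k independent layers must fail for the system to fail."""
    return p_layer ** k

def p_fail_common_mode(p_layer, k, p_common):
    """A shared fault defeats all layers at once; otherwise layers are independent."""
    return p_common + (1 - p_common) * p_layer ** k

p = 0.01  # assume each layer fails 1% of the time
print(p_fail_independent(p, 3))         # three independent layers: ~1 in a million
print(p_fail_common_mode(p, 3, 0.001))  # a 0.1% shared fault dominates everything
```

With these assumed numbers, adding a third backup buys nothing once a one-in-a-thousand common-mode fault exists: the shared fault is roughly a thousand times more likely than all three layers failing independently.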
I should have mentioned that story that emerged this week as part of the BP oil-leak investigation. The well apparently has a device whose sole purpose is to clamp the pipe shut in case of an emergency. The explosion shifted the pipe slightly so that the two opposing parts of the clamp couldn't form a tight fit.
You'd think they would have factored in potential misalignment of the pipe in an explosion, but maybe not.
To KD's point, models are models. And I suppose if we could factor in all the potential outcomes, we'd never move forward for fear of catastrophe.