It is human nature to believe that we won't repeat a "mistake" made by someone else. A key lesson for me is that, as a bare minimum, every one of these lessons should be added to the readiness checklist for new projects. While I'm not in the aerospace arena, I find it also very helpful to record the root cause of each programming error that I experience. This not only increases my awareness of the likely errors (so I avoid making them) but also gives some hints about where to look for trouble. (Years ago, I found that my software problems were most often associated with mismatched global variables.)
I had a story I told about auto repair:
'Guy drags in his car, tells you the fuel pump is bad. You replace the fuel pump; car still doesn't work. You ask the guy if he wants you to fix the car.' End of story? No... you, as the storyteller, are an arrogant s**t and should have done your diagnostic work regardless of what the owner said.
This applies to specifications passed on to software/hardware engineers by project manglers: question first, then develop. Doing otherwise risks the mission, and your career and company.
Clementine didn't fail because of firmware, software, or hardware. It failed because of management. PERIOD. The comment, "... but the development schedule had been so compressed that the programmers never had time to write the code to turn it on," tells the root cause. Too often, schedules are not set by the people with a) the most at stake (the engineers, who will be fired for failing), and b) the best sense of what a reasonable estimate is (again, the engineers).
This lesson is less for the programmer and more to the project managers running these developments. It's just too easy for short-sighted managers to focus on progress based on lines-of-code-written rather than on spending time developing a quality architecture with the clock ticking. These clueless folks fear that all the code won't get written if their people spend their time designing and not writing actual code.
I firmly agree with Jack and always have; software jocks get accused of being cowboys with their code and pushing the "churn and burn" model.
And yet one of our clients is sending some of us to Test Driven Development training. The talk there is all about "the tests embody the requirements" and "emergent design."
Two ends of the spectrum.
But in his article, isn't Jack doing this very thing? Accusing software jocks of being cowboys with their code and pushing the 'churn and burn' model (even while flatly contradicting himself, in his own article, as to the problem's true cause)?
On the one hand Jack ascribes the problem to the programmers' failure to come up with the perfect design from the outset (dream on, bro), but then states that the problem was a "compressed schedule" which, by its very nature, forestalls any such luxury! And who came up with this compressed schedule, anyway? The programmers?
I don't mean to rain on your parade, but Jack's head is evidently planted firmly where the sun don't shine.
You firmly agree with Jack and always have? Surely you jest. This entire article is a comedy of errors. The man is clueless.
Little Jack Horner sat in the corner, eating his Christmas pie. He put in his thumb and pulled out a plum, and said "What a good boy am I!"
In this nonsensical rhyme, Jack Horner concludes he is a 'good boy' based on what evidence? He pulled out a plum. Why would this act signify anything of the sort to anyone with any sense? It is a non-sequitur; exactly the same sort of non-sequitur Mr. Jack Ganssle draws in his planned presentation. Nonsensical. He even states the actual problem and then draws from it a conclusion having nothing whatsoever to do with his example, and everything to do with project mismanagement. With managers like this on the job, is it any wonder Clementine failed?
Furthermore, the solution Ganssle proposes would not have succeeded in Clementine's development environment regardless. Why not? There simply wasn't time. Does Ganssle think that all this careful planning beforehand would have sidestepped Clementine's failure, when the reality was that this planning would never have occurred in the first place? At what point in Clementine's software development effort would there have been time, when Ganssle himself states that there wasn't enough time even to 'turn on the code' (whatever that means)?
Finally, in his proposed solution, Ganssle goes on to ignore *what else* has gone on meanwhile, in his forty years' overview. What about unit testing? Testing soon and often? Making use of reusable and thoroughly tested software components and libraries? Best practices for robust, reliable software development? Redundancy? A whole host of techniques and technologies have been developed over these past forty years, tested in the real world and deployed there with huge success. No, design near-perfect software from the outset (how will he know it's near-perfect?), then write the code. No mention of testing?
Why is this noob giving a presentation at Design West?
Clementine was developed to space-qualify technologies for NRL, which realized that it could also be used to obtain better data on the moon in the process, as a secondary mission. It was completely successful in these missions, establishing a "cheaper, faster, better" baseline.
Having completed its missions, but still in good condition, it was given a further mission to Geographos. To say that its failure to accomplish that additional mission means it was a failure is inaccurate. However, we can still learn lessons from that failure.