Out of respect for Mr. Ganssle, I'd like to point out that he has been doing embedded systems work since the first microprocessors rolled off the line. Perhaps you could extend him the common courtesy of at least Googling him to find out that he is a highly respected expert who was around before the term "noob" became part of Gen X vocabulary. You may find you'd actually like to attend his lecture or his excellent course.
Clementine was developed to space-qualify technologies for NRL, which realized that it could also be used to obtain better data on the moon as a secondary mission. It was completely successful in these missions, establishing a "cheaper, faster, better" baseline.
Having completed its missions, but still in good condition, it was given a further mission to Geographos. To say that its failure to accomplish that additional mission means it was a failure is inaccurate. However, we can still learn lessons from that failure.
Little Jack Horner sat in the corner, eating his Christmas pie. He put in his thumb and pulled out a plum, and said "What a good boy am I!"
In this nonsensical rhyme, Jack Horner concludes he is a 'good boy' based on what evidence? He pulled out a plum. Why would this act signify anything of the sort to anyone with any sense? It is a non sequitur; exactly the same sort of non sequitur Mr. Jack Ganssle draws in his planned presentation. Nonsensical. He even states the actual problem and then draws from it a conclusion having nothing whatsoever to do with his example, and everything to do with project mismanagement. With managers like this on the job, is it any wonder Clementine failed?
Furthermore, the solution Ganssle proposes would not have succeeded in Clementine's development environment regardless. Why not? There simply wasn't time. Does Ganssle think that all this careful planning beforehand would have sidestepped Clementine's failure, when in reality that planning would never have occurred in the first place? At what point in Clementine's software development effort would there have been time, when Ganssle himself states that there wasn't enough time even to 'turn on the code' (whatever that means)?
Finally, in his proposed solution, Ganssle goes on to ignore *what else* has happened meanwhile, over his forty-year overview. What about unit testing? Testing early and often? Making use of reusable, thoroughly tested software components and libraries? Best practices for robust, reliable software development? Redundancy? A whole host of techniques and technologies have been developed over these past forty years, tested in the real world, and deployed there with huge success. But no: design near-perfect software from the outset (how will he know it's near-perfect?), then write the code. No mention of testing?
Why is this noob giving a presentation at Design West?
But in his article, isn't Jack doing this very thing? Accusing software jocks of being cowboys with their code and pushing the 'churn and burn' model (even whilst flatly contradicting himself in his own article as to the problem's true cause)?
On the one hand, Jack attributes the problem to the programmers' failure to come up with the perfect design from the outset (dream on, bro), but then states that the problem was a "compressed schedule" which, by its very nature, forestalls any such luxury! And who came up with this compressed schedule, anyway? The programmers?
I don't mean to rain on your parade, but Jack's head is evidently planted firmly where the sun don't shine.
You firmly agree with Jack and always have? Surely you jest. This entire article is a comedy of errors. The man is clueless.
I firmly agree with Jack and always have; software jocks get accused of being cowboys with their code and pushing the "churn and burn" model.
And yet one of our clients is sending some of us to Test Driven Development training. The talk there is all about "the tests embody the requirements" and "emergent design."
Two ends of the spectrum.
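To illustrate what "the tests embody the requirements" means in Test Driven Development: the requirement tests are written first, and only then is the minimal code written to make them pass. This is a generic sketch (the `clamp` function is a made-up example, not anything from Ganssle's article or the TDD course):

```python
def clamp(value, low, high):
    """Minimal implementation, written only after the tests below existed."""
    return max(low, min(value, high))

# In TDD these tests come first; each one encodes a single requirement.
# Requirement 1: a value inside the range passes through unchanged.
assert clamp(5, 0, 10) == 5
# Requirement 2: a value below the range is raised to the lower bound.
assert clamp(-3, 0, 10) == 0
# Requirement 3: a value above the range is lowered to the upper bound.
assert clamp(42, 0, 10) == 10

print("all requirement tests pass")
```

Design "emerges" in the sense that the implementation stays only as elaborate as the accumulated tests demand — the opposite extreme from designing everything up front.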
This lesson is less for the programmers and more for the project managers running these developments. It's just too easy for short-sighted managers to measure progress by lines of code written rather than spend time developing a quality architecture while the clock is ticking. These clueless folks fear that all the code won't get written if their people spend their time designing instead of writing actual code.
Clementine didn't fail because of firmware, software, or hardware. It failed because of management. PERIOD. The comment, "... but the development schedule had been so compressed that the programmers never had time to write the code to turn it on," tells the root cause. Too often, schedules are not set by the people with a) the most at stake (the engineers, who will be fired for failing), and b) the most sense of what a reasonable estimate is (again, the engineers).
I used to tell a story about auto repair:
'A guy drags in his car and tells you the fuel pump is bad. You replace the fuel pump; the car still doesn't work. You ask the guy if he wants you to fix the car.' End of story? No... you, as the storyteller, are an arrogant s**t and should have done your diagnostic work regardless of what the owner said.
This applies to specifications passed on to software/hardware engineers by project manglers - question first, then develop. The contrary is risking the mission and your career/company.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a few developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.