DO-178B was brand new when that first airliner was done -- the use of a spiral development model was essential in allowing new developments like a COTS RTOS to be more fully fleshed out. Good tools were also important -- the hardware was fully simulated before board layout was finished, providing a solid foundation for the code to run on.
I am not at all familiar with the formal development processes listed, but I have developed effective techniques over the years that have made my code and systems work well -- though not well enough to trust in a system where life is at stake. (I would seek out proven development techniques as the author did if the need arose.) The simplest one is documenting the code, which serves to clarify the solution to the task at hand and makes effective review by a third party easier. For some reason, dedicated software engineers seem to shun documentation. One of my favorite interview questions for new engineers is how to decide how large to make the stack space in a microcontroller when programming in C.
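One common answer to that interview question is "stack painting": fill the stack region with a known pattern at startup, run the system through its worst-case paths, then scan for the high-water mark. Here is a minimal sketch of the idea; the symbol names (`stack_region`, `STACK_WORDS`, the `0xDEADBEEF` pattern) are illustrative assumptions, since on real hardware you would paint the actual stack area defined in the linker script.

```c
#include <stdint.h>
#include <stddef.h>

#define STACK_WORDS 256          /* illustrative size; real value comes from the linker script */
#define PAINT       0xDEADBEEFu  /* any pattern unlikely to occur by accident */

/* Stand-in array for the MCU's stack region; assume the stack grows
   downward, i.e. from stack_region[STACK_WORDS-1] toward stack_region[0]. */
static uint32_t stack_region[STACK_WORDS];

/* Called once at startup, before the stack is used in anger. */
void paint_stack(void) {
    for (size_t i = 0; i < STACK_WORDS; i++)
        stack_region[i] = PAINT;
}

/* Scan from the far end of the stack: words still holding the paint
   pattern were never touched, so the count is the remaining headroom. */
size_t stack_headroom(void) {
    size_t i = 0;
    while (i < STACK_WORDS && stack_region[i] == PAINT)
        i++;
    return i;  /* number of untouched words */
}
```

In practice you run the device through its deepest call chains and interrupt nesting, then read the headroom back over a debug channel and size the stack with a safety margin on top of the observed worst case.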
I've noticed that younger engineers educated and trained on applications in a PC environment have less awareness of resource pitfalls and of the system implications of wasting resources through inefficient code. The usual solution to bloated code -- more memory and faster clocks -- can have negative implications for cost, package size, weight, EMI/RFI, complexity/bugs, etc.
Yes, the way software is written for avionics and medical devices is definitely different from how it's written for entertainment devices, and the kind of software testing done also varies. It's nice that well-developed standards are in place; they can help both software engineers and managers. I am sure companies that work in these specific areas use these standards and follow them.
The projects that came after that all used formal development processes (DO-178B, DO-254, etc.). The Flight Management System I did some preliminary work on has gone on to be used in one of only two airliners with over one million flight hours that are fatality-free -- per the latest records I could find.
Larry said: One thing that I have noticed over the years is that the definition of "good code" keeps changing. When I started back in the dark ages good code meant code that was as compact as possible.
I'm going to repeat my favorite quote from C.A.R. Hoare:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
That clean, simple design usually has the property that it's very compact, since complexity generally adds SLOCs while simple designs with elegant abstractions do the opposite. What generally happens nowadays is that as far as management is concerned, "good code" means "cheap code", i.e., cheap to get out the door. Maintenance costs come out of a different manager's budget. As the joke says: "Quality is Job 1.1".
@LarryM99: One thing that I have noticed over the years is that the definition of "good code" keeps changing.
You make a very good point. I also agree that it's worth the extra resources it takes to make one's code robust and secure. The one thing I worry about is that many of today's programmers don't seem to have any conception of trying to keep the memory footprint low and cut down on the number of clock cycles it takes to do something. In some cases I understand that the savings aren't worth the effort -- but in other cases I've seen the most appalling code that even I know does things in a really wasteful manner that cannot fail to negatively impact the overall system performance to a noticeable extent.
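As a concrete (and hypothetical) example of the kind of waste being described, here is a classic pattern: calling `strlen()` in a loop condition. `strlen()` re-scans the whole string on every pass, turning a linear loop into a quadratic one -- invisible on a fast PC, painful on a small MCU.

```c
#include <string.h>
#include <ctype.h>

/* Wasteful version: strlen() walks the string on EVERY iteration,
   so the loop is O(n^2) in the string length. */
size_t count_upper_slow(const char *s) {
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); i++)
        if (isupper((unsigned char)s[i]))
            n++;
    return n;
}

/* Same result in a single O(n) pass -- fewer cycles, less bus traffic. */
size_t count_upper_fast(const char *s) {
    size_t n = 0;
    for (; *s; s++)
        if (isupper((unsigned char)*s))
            n++;
    return n;
}
```

Both functions return the same answer; the only difference is how many cycles get burned to produce it, which is exactly the sort of thing that shows up in overall system performance on a resource-constrained target.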
One thing that I have noticed over the years is that the definition of "good code" keeps changing. When I started back in the dark ages good code meant code that was as compact as possible. It was worth a lot of effort to save a few bytes, and if a programmer did so by using arcane knowledge of the specific system then he (and the pronoun was almost completely appropriate) was considered a valuable asset. Over the years the definition of "good" evolved to mean robust code that was transportable and reusable. That code uses a lot more resources on a target, but it allows much more complex applications. As the parts cost of full SoC implementations of 32-bit processors approaches that of MCUs it makes more and more sense to 'waste' that computing horsepower in even very small systems.
Even knowing what I know, I still expect electronic systems to work when I buy them -- I still assume that whoever created the hardware and the software had a clue. I think the thing is that I don't want to think about how kludgy a lot of this stuff actually is (sad face)