My own, for the past 20 years:
Design parallel, optimize by sequentialisation.
Software is modelling and most of the things we model are inherently concurrent (or parallel). Almost anything can be expressed as a set of "Interacting Entities".
The reason we can get more out of the computer by sequentialisation is that the hardware is inherently designed to be sequential. I call that the von Neumann syndrome. To get the most out of 2 or more processors, the program must be designed as a parallel one. Getting parallelism out of a sequential program is really an exercise in reverse engineering.
One benefit of the array computing approach is that natural parallelism can be exploited, albeit in an inefficient way. The difficulty with this approach is that it fails to deal with I/O. Possibly some mix of this approach with traditional approaches is required. I would note that in many ways the array computing approach maps onto CSP.
The real problem is that we think sequentially, understand natural parallelism, and have difficulty being creative enough to turn serial streams of processing into parallel algorithms.
This is a good comment. In image processing this is often very clear: the image can simply be cut into sections. In many other problems, however, the view of the early parallel programmers seems more appropriate:
Paraphrased it was: "parallel algorithms for the real world are like so much smoke" - creators of Illiac IV, a 64 processor SIMD machine built in 1969.
I think that many problems become serial and this is the limitation of multicore. System designers and programmers can think and debug in parallel when natural parallelism exists. They can rise to the occasion and put multiple data processing streams on a shared multicore platform. What they can't do is take a clearly sequential problem and make it parallel. This has been the barrier for the past 30 years.
Good question. CSP does look a bit ugly, but not as bad as C. I guess it needs a makeover like occam got in its incarnation as a hardware language wrapped up in C. Better yet, a graphical (3D?) programming toolset.
Then there's the overlooked fact that in many situations there's not enough "processing" to benefit from multi-things. In a decision/logic tree there can be many initial decisions that are very simple, where results must be combined in order to proceed to the next level. Dispatching a task or thread to test if a number is less than zero probably has ten times as much overhead as the compare itself, and doing it on 10 processors for 10 compares wastes a lot of power and time. Here is where dedicated hardware can beat the processor, for instance by simply testing the sign bit of a register.
It appears no one has mentioned "data dependency", which kills parallel computing because it is too much for a mediocre programmer to take on.
The hard part of parallel computing is to break a data dependent sequence problem into multiple data independent pieces to leverage multiple cores.
"I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity." Donald Knuth, professor emeritus at Stanford
"The wall is there. We probably won't have any more products without multicore processors [but] we see a lot of problems in parallel programming." Alex Bachmutsky, chief architect at Nokia Siemens Networks.
Which reminds me of Microsoft's short-lived ad campaign whose tag line was: "Imagine life - without walls."
Now, I don't think Mr. Bachmutsky's 'wall' was the same sort of wall M$ had in mind (lord knows for them it's still over the horizon), but I must say that the very first thing that came to mind when I saw Microsoft's ridiculous advert on that billboard was this:
"Look, Microsoft, if you don't have walls then what possible use would you have for Windows?"
I wrote Microsoft and asked about it, but no one replied. Imagine my surprise.
Then, not long ago Microsoft released a product which they dubbed 'Azure.' Remind you of anything? Azure? Blue? Blue, as in "Blue Screen of Death?"
I think there's a Linux mole in Marketing.
I like Kernighan's quote the best. It goes to the heart of what is most difficult about parallel programming: debugging. I don't listen to the armchair experts who keep telling us how to write code with accurate modelling so that the program will be bug free. That sort of thing is just wishful thinking. No large program that I know of is bug free, and almost all of them are sequential programs. Now try designing a bug-free concurrent program. If it is large enough, it will have bugs (most likely many more than an average sequential program). Then get ready for your hardest task ever: debugging it!