Parallel programming is easy if a problem is parallel by nature, as in much image processing. Parallel programming is hard if a problem is sequential by nature, such as division.
I'll be very happy to see Professor Patt show an efficient way to do a 4096-bit number divided by a 2048-bit number in parallel. :)
What you need is to learn and teach parallelism at the highest level and with an appropriate language. Then the parallel nature of problems becomes obvious and not hard at all.
I don't know what happened to him after all these years, but Jon Webb (from CMU's Vision and Autonomous Systems Center) did some GREAT work on this. The Adapt language was the perfect example of what I am talking about: a general language that lets you think about and break down the inherently parallel processes in image processing using a very simple language and set of constructs, which can then be compiled efficiently onto any machine architecture, no matter how sequential or how parallel that architecture is, taking full advantage of any parallelism afforded. You could run on a basic Intel x86 machine, a toroidal iWarp of any number of nodes, or even a Cray Y-MP, and you saw very efficient scaling in terms of the processor resources at your disposal.
Parallel Programming is easy when you know the right language to use. Developing those languages is the hard part and a part of computer science (in my opinion of course) that has been withering away for at least 15 years... I believe due to the mostly brute-force, inelegant methods thrown at the ever increasing resources available. Elegance is gone... mapping elegance onto crude architectures with bad compilers makes parallel programming appear hard...
Parallel programming is not hard at all... it's only hard when you've been raised on languages like Fortran, Basic, C, and these days Java, C++, and all the rest...
The key is how you are taught to think about problems. In the old days when people were taught line numbers and GOTO statements, that was the biggest disservice one could have done to a budding computer software person. I of course am and will always be a Lisp/Scheme believer when it comes to teaching basic computer science principles, ideas, and programming. The right language keeps you from developing bad thought patterns, bad habits, and bad ways of thinking about solving the problem. A bad language corners you into the classic problem that everything looks like a nail when all you have is a hammer...
To learn how to program parallel structures, you need the right language (and cthreads is NOT it...) All these things are hacks on top of your classic SISD Von Neumann architecture type thinking. Harvard Architectures, MIMD machines, etc. are all programmed with the old ideas cast in stone, and with simple additions to exploit some part of the new underlying architecture.
As someone mentioned above, parallelising simple pieces of code is easy; maintaining and managing parallelism is very hard. The layers of abstraction are indeed a hurdle to parallel programming. However, they are there for a reason: they allow you to break problems into simple pieces that can be understood and implemented well. The complexity of understanding sequential code grows exponentially as the levels of abstraction are removed. It would be hard to manage and maintain a few million lines of code without them.
Parallel programming is hard. The tool support for debugging parallel issues is archaic. Programming-language constructs for simple synchronization would help break problems into simpler pieces and make them easier to program.
There is a limit to human comprehension (yes, all applications have bugs). A sequential model allows simple checkpoints that break a program into simpler pieces you can test or debug effectively. Parallel programming adds a lot of complexity, and understanding issues across various levels of abstraction is easier said than done.
Parallel programming may or may not be hard (pun intended) but it certainly seems to be at odds with the linear "programming model" used in conventional speech and writing.
What is definitely hard is getting an industry of millions of programmers to change from the status quo (C/C++) without clear economic benefits.
Where one or two orders of magnitude of benefit can be shown (10x or 100x), the changeover is almost immediate. Where the benefit is present but marginal -- a few tens of percent, or 2x -- it is not so easy and often does not happen.
Vested interests and hidden costs maintain the old ways of doing things [return to top of story :-) ].
cool, :D he does seem hard to find online generally...
you should get him to come here under "transputer dave"
and have him write some articles for EE Times, I'm sure they would take them :P
and of course many people would love to read his thoughts on many subjects.
I would even go one step further. In 1988 (years after Hoare wrote the book on CSP), Chandy and Misra (UT Austin) wrote "Parallel Program Design", basically a book on using UNITY (a kind of formal notation) for specifying what a program should do, not how it should do it. What the book made clear to me is that there are no parallel algorithms: you have algorithms, and they have sequential as well as parallel (or mixed) implementations. Often the parallel version is a lot easier and simpler. Even quicksort is very easy to rewrite in a parallel version.
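To illustrate the point that the algorithm stays the same and only the implementation changes, here is a minimal parallel quicksort sketch. Python and its `concurrent.futures` module are my choice here, not anything from the books mentioned; note that Python threads give no CPU speedup because of the GIL, so this shows the structure only: the two recursive calls share no state and can run concurrently.

```python
# Hypothetical sketch: quicksort with the two independent recursive
# calls dispatched to a thread pool. Partitioning stays sequential.
from concurrent.futures import ThreadPoolExecutor

def quicksort(xs, executor=None, depth=1):
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    if executor is not None and depth > 0:
        # The halves are independent, so sort them concurrently.
        # Bounded depth keeps nested tasks from exhausting the pool.
        f_left = executor.submit(quicksort, left, executor, depth - 1)
        f_right = executor.submit(quicksort, right, executor, depth - 1)
        return f_left.result() + mid + f_right.result()
    return quicksort(left) + mid + quicksort(right)

with ThreadPoolExecutor(max_workers=4) as ex:
    print(quicksort([5, 3, 8, 1, 9, 2], ex))  # → [1, 2, 3, 5, 8, 9]
```

The sequential and parallel versions are the same algorithm; the only difference is whether the two recursive calls wait for each other.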
The issue is that processor architects, and hence programming language developers, suffer from the "von Neumann" syndrome. They can only think in terms of shared memory, shared state spaces, and sequential logic. And of course this is hard, because you must control the state space in time to keep it consistent.
If you think in terms of concurrency, the problem largely goes away because the state spaces become local. How does one achieve determinism? By synchronisation; this makes the logic time-independent. Most RTOSes do this quite well. Thread-based programming doesn't, and that's why it is hard. Unfortunately, this bad reputation has spread to the whole notion of parallel programming, for the wrong reasons. Think parallel, then optimise sequentially (because you can exploit local properties). That may make the code faster, but you lose the scalability that a good concurrent programming model offers.
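A small sketch of "local state spaces plus synchronisation at the channel", in the CSP spirit, using Python's standard `threading` and `queue` modules (my choice of illustration, not a library the commenter names). The worker owns its state outright; nothing is shared, so nothing needs locking, and the channel ordering makes the result deterministic.

```python
# Hypothetical CSP-style sketch: a worker process with private state,
# communicating only through message queues (channels).
import threading
import queue

def worker(inbox, outbox):
    total = 0                  # state local to this worker; never shared
    while True:
        msg = inbox.get()      # synchronisation happens at the channel
        if msg is None:        # sentinel: shut down and report
            outbox.put(total)
            return
        total += msg           # deterministic: order fixed by the channel

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in [1, 2, 3, 4]:
    inbox.put(n)
inbox.put(None)
t.join()
print(outbox.get())  # → 10
```

Contrast this with a shared-counter-plus-mutex version: here there is no shared state space to keep consistent in time, which is exactly the point being made above.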
OK - I challenge anyone who thinks "parallel programming is hard" to pick up a book by Per Brinch Hansen - "Parallel Programming Paradigms" - read that and then tell me it's hard.
Better still, you don't even have to wait - you can go and read his work here:
I cited two examples of nondeterminism - one by calculus model (CSP) and one by architecture implementation (memory hierarchies). One I believe causes a problem (the latter), the other I think does not.
I completely agree that "events that are supposed to be deterministic should remain so". But I cannot think of any examples where this is not true. Even non-deterministic timing from memory hierarchies is generally thoroughly documented and you can design around the worst-case to give you either asymptotic estimates or absolute bounds on program performance. And in some cases non-determinism is by design and therefore does not break your rule above.
If I were being really picky, your original statement demanded "temporal" determinism, whereas your follow-up claim relates to essentially spatial determinism.
With your final point - what is this "concurrent-threads model used by the multicore industry"? Which model are you referring to? Why does it not guarantee determinism? Why is determinism needed?