Whether modifying an existing application or writing entirely new code, parallel applications can be much more challenging to work with than their sequential counterparts.
Without a doubt, the high-level abstractions and APIs commonly used today greatly simplify the process, but most approaches still require developers to identify parallelizable sections of code by hand and to reason about issues such as race conditions and synchronization among parallel tasks. And as the number of CPU cores on a single chip grows, applications written to exploit that hardware will only magnify the pain of parallel programming.
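To make the race-condition problem concrete, here is a minimal sketch in Python (chosen only for brevity; the issue is language-independent). Several threads increment a shared counter; because `counter += 1` is a read-modify-write sequence rather than a single atomic step, the unprotected version can lose updates, so the sketch guards the increment with an explicit lock — exactly the kind of manual synchronization the text describes.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times, holding a lock each time."""
    global counter
    for _ in range(n):
        # Without this lock, two threads could read the same value of
        # `counter`, both add 1, and both write back -- losing an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4 threads x 100,000 increments = 400000
```

The burden here falls entirely on the programmer: forget the lock, or acquire it in the wrong order relative to another lock, and the program silently misbehaves. Dataflow languages aim to eliminate this class of error by construction rather than by discipline.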
A class of programming languages based on a principle called dataflow not only can greatly simplify the process of developing code for multicore processors today but also could become a key strategy for leveraging the many-core CPUs of the future.
In seeking solutions to the parallel programming challenge, it is helpful first to consider how the current-day mismatch between programming languages and parallel processor architectures came about.
As processor hardware has evolved, embedded engineers and computer scientists have customarily written programs that map directly onto how that hardware is structured. At the most basic level, this is evident in assembly language, where programs manipulate processor registers directly.