Microprocessors are marching into a multicore future to keep delivering performance gains without frying in their own heat. But mainstream software has yet to find its path to using the new parallelism.
Proprietary programming approaches are gaining traction in a handful of applications. It could take a decade or more, however, for the bulk of the industry to catch up in any organized fashion, and the way forward goes through some tough terrain.
"Anything performance-critical will have to be rewritten," said Kunle Olukotun, director of the Pervasive Parallelism Lab at Stanford University, one of many research groups working on what is widely seen as the toughest problem in computer science today.
"Eventually, you either have to rewrite that code or it will become obsolete," said Olukotun, who will deliver a keynote on the topic this month during the Multicore Virtual Conference.
"This is one of the biggest problems telecom companies face today," said Alex Bachmutsky, a system architect and author of an upcoming book on telecom design. "Their apps were not written for multiple cores and threads, and the apps are huge; they have millions and tens of millions of lines of code."
The ubiquitous C language "is the worst [tool] because it is inherently sequential and obscures any parallelism inherent in an algorithm," said Jeff Bier, president of DSP consulting firm Berkeley Design Technology Inc.
In a study conducted earlier this year by TechInsights, the publisher of EE Times, 62 percent of the embedded systems developers polled said their latest project was written in C. A further 24 percent said they used C++.