
Panel: Wall ahead in multicore programming

5/4/2011 00:17 AM EDT
19 comments
re: Panel: Wall ahead in multicore programming
TFCSD   10/7/2011 4:14:56 AM
If I remember correctly, the problem started in the late 90s for most non-supercomputers, when single-core processors ran into power-consumption and overheating problems going past 4 GHz. This problem was sidestepped by multiprocessors for a while. If you could turn a 100% sequential program into a 100% parallel program, you could argue that massive parallelism is worthwhile, but most programs are still a mix and can only be partly parallelized, which drops the effectiveness of massively multicore processing (including multicore overhead). The problem is that many programmers are going to have trouble optimizing their code for parallel processing. The solution is for the compiler to do the work (as it does when assigning registers, etc.); that way only a few compiler programmers need to go nuts, leaving the rest of us with some form of sanity. Hopefully it can all be abstracted to make it simpler. As for a wall? Life is full of them, and hopefully the evolutionary mix of speed increases and better compilers will save us.
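[Editor's note: the trade-off this comment describes is Amdahl's law. A minimal sketch, with the function name and sample numbers chosen for illustration:]

```python
# Amdahl's law: the speedup from running the parallelizable fraction p
# of a program on n cores. The serial remainder (1 - p) caps the gain
# no matter how many cores are added.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 128 cores give well under 20x.
print(round(amdahl_speedup(0.95, 128), 1))
```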

re: Panel: Wall ahead in multicore programming
Ian Joyner   5/14/2011 1:36:34 AM
Mapou, I believe we are talking more about the von Neumann model rather than Turing, who was much more a visionary in complex interactions in systems, although he did introduce the very simple Turing machine to reason about computability. John Backus asked the question "Can Programming Be Liberated from the von Neumann Style?": http://www.stanford.edu/class/cs242/readings/backus.pdf He spent the rest of his life searching for this. I do kind of agree with your sentiments, though. We have had a generation of computer scientists who ignored what went on in Burroughs machines while they chased low-level performance via RISC. We should become reacquainted with this architecture and its multiprocessor/multiprocessing capabilities: http://en.wikipedia.org/wiki/Burroughs_large_systems Bob Barton, who designed these machines, would also share your sentiments, since he wrote about the low-level cults that had become entrenched (in the 60s) rather than true system-level architecture. There are a few papers of his around the web.

re: Panel: Wall ahead in multicore programming
DrQuine   5/6/2011 11:32:48 PM
Certainly there are processes running on computers (graphic display, background virus checking, decryption, document scanning, PDF creation, music playing) that could be allocated to separate cores, freeing up the main thread to run faster. When I look at my sluggish computer, many of the dozens of threads that are running could be off-loaded. Even if everything cannot be perfectly optimized for parallel processing, every bit helps. Also, when systems get into a gridlocked state, it would seem an ideal opportunity to instantly spawn a parallel process and return some control to the user.
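[Editor's note: a minimal sketch of the kind of offloading described here, using Python's standard multiprocessing pool; `background_scan` is a hypothetical stand-in for a task like virus checking:]

```python
# Run an independent job in a separate process (potentially on another
# core) while the main thread stays free to respond to the user.
from multiprocessing import Pool

def background_scan(n):
    # Hypothetical CPU-bound background task
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        result = pool.apply_async(background_scan, (10000,))
        # ... main thread remains responsive here ...
        print(result.get())  # collect the result when needed
```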

re: Panel: Wall ahead in multicore programming
pbinCA   5/6/2011 4:30:59 PM
Any mapping onto a parallel architecture (if it is to accomplish any speedup) MUST understand the run-time flow of information processing. Analyzing source code just won't cut it. If the system processes an external datastream, understanding how a parallel processing system reacts in real time gets even hairier. To do what you are suggesting (automated parallelization as a general-purpose tool) would require that the system's purpose or intention be captured in the source code. Clearly, system purpose resides in its human designers, and only gets implemented in the code. This implies that human designers are needed to parallelize an arbitrary serial process (and have it work faster).

re: Panel: Wall ahead in multicore programming
Rishiyur.Nikhil   5/5/2011 7:45:11 PM
Re. 'we already have too many of them [programming languages]. "First need to understand what it means to be parallel".': Perhaps these are not separate concerns. George Boole said, in his "Investigation of the Laws of Thought", that, "language is an instrument of human reason, and not merely a medium for the expression of thought". See also Ken Iverson's 1979 Turing Award lecture, "Notation as a Tool of Thought". Our "too many" sequential languages fundamentally restrict how we even think about computation.

re: Panel: Wall ahead in multicore programming
Rishiyur.Nikhil   5/5/2011 6:12:54 PM
Re. "A tool should be able to find and point out the dependency that keeps tasks A and B from running in parallel." This problem is almost as old as computing itself. In the 1960s and '70s, when "vector machines" first appeared (CDC 6600, Cray-1), this was called the "dusty deck" problem, i.e., couldn't some tool take our existing Fortran codes ("dusty decks") and automatically parallelize them to run on the new vector machines? Fifty years of research return an emphatic "No!". The first issue is that parallel algorithms are often different from sequential algorithms; you've already lost the game if you start with sequential algorithms. Second, the dependency analysis is simply computationally intractable, except in so-called "loop-and-array" codes: well-structured, properly nested FOR-loops with affine array indexes--this is a very niche success.
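[Editor's note: a toy illustration, not a real parallelizer, of the distinction drawn here. The first loop is dependence-free (each iteration touches only its own elements, so iterations could run in parallel); the second has a loop-carried dependence that forces sequential execution:]

```python
def scale(b):
    # Affine, dependence-free: a[i] = 2 * b[i]; no iteration reads
    # another iteration's result, so this parallelizes trivially.
    return [2 * x for x in b]

def prefix_sum(b):
    # Loop-carried dependence: a[i] reads a[i-1], the previous
    # iteration's write, so the loop cannot be naively parallelized.
    a = [0] * len(b)
    for i in range(len(b)):
        a[i] = b[i] + (a[i - 1] if i > 0 else 0)
    return a
```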

re: Panel: Wall ahead in multicore programming
aivchenko   5/4/2011 9:10:23 PM
The problem is not so much in programming for parallel processes (it can be successfully done in Verilog), but rather that the single-thread Turing model is not very good for parallelism. An even bigger problem to overcome is the human brain, which can manipulate only 5 to 7 objects at a time. So without the development of new tools, it is highly improbable that the multicore approach will allow the continuation of Moore's law.

re: Panel: Wall ahead in multicore programming
fdunn0   5/4/2011 8:18:20 PM
As long as virtualization can segment cores, I don't see the big issue right now. Yes, the wall was hit with today's manufacturing processes, but that doesn't mean the wall will be there tomorrow. On top of that, many a hardware geek has super-cooled current processors and gotten an almost 200% speed increase. Maybe people are just overlooking trying to scale better cooling solutions.

re: Panel: Wall ahead in multicore programming
Code Monkey   5/4/2011 7:18:50 PM
Manual rewrites are expensive. Why can't they be automated, or at least semi-automated? It's a matter of refactoring, with tools finding dependencies among modules and suggesting ways to refactor in a parallel-friendly way. A tool should be able to find and point out the dependency that keeps tasks A and B from running in parallel.
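[Editor's note: a hedged sketch (all names hypothetical) of the kind of dependency such a tool would have to spot: two tasks that look unrelated but are serialized by hidden shared state, alongside a refactoring that makes the data flow explicit:]

```python
# task_a and task_b share a mutable global, so they cannot safely run
# in parallel; a dependency-finding tool would have to flag this.
log = []

def task_a():
    log.append("a ran")        # hidden write to shared state

def task_b():
    return len(log)            # hidden read of the same state

# Refactored: the data flow is explicit in the signatures, so a tool
# (or a human) can see exactly what must run before what.
def task_a2():
    return ["a ran"]

def task_b2(entries):
    return len(entries)
```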

re: Panel: Wall ahead in multicore programming
Eric Verhulst_Altreonic   5/4/2011 7:16:40 PM
The solution is a lot closer than most people think. It looks like a lot of knowledge was lost in the last decade. See http://www.electronics-eetimes.com/en/News/full-news.html?cmp_id=7&news_id=222906867 (Title: Traditional Programming has hit the power wall).
