Maybe there is too much anxiety over programming tools. The preemptive multitaskers used for years already provide most of the tools needed to organize apps into semi-autonomous threads. A little more work in that direction and a change in mindset among programmers will lead to apps that are equally at home running their threads across many processors or on a single processor.
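The "semi-autonomous threads" point can be sketched with a minimal (hypothetical) example: the same thread-pool code runs unchanged whether the host has one processor or many, and the runtime handles the scheduling.

```python
import concurrent.futures
import os

def checksum(chunk):
    """One semi-autonomous unit of work: sum a slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=None):
    """Split the data into per-thread chunks. The executor hides
    whether the threads run on one core or across many."""
    workers = workers or (os.cpu_count() or 1)
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(checksum, chunks))

print(parallel_sum(list(range(1000))))  # 499500 on any core count
```

The app-level code never mentions the processor count; that is the mindset change the comment argues for.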
"very costly to write, and _debug_ the complex software".
My experience has been the complete opposite, so I must caution against tarring all multicore apps with the same brush. Indeed, I have been on many projects where multiple CPUs made the job much easier; even without a performance problem, attempting to make one CPU do the whole job would have been a development nightmare!
A divide-and-conquer approach to the design resulted in tremendous simplification, and the logistics of managing the coding team and keeping the build coherent were streamlined and much more productive, rather than becoming a version-control mess.
Nice article, Tom. I was hoping for a similar scan of the software support landscape as well as the hardware one. Who are the main players for multicore processors' programming support? Any contributions out there?
I attended the Multicore conference. There was some good information. http://concretemulticore.wordpress.com/2010/09/30/approching-multicore-conference-live-blog/
The chips have arrived, the software and tools have a way to go.
There is no silver bullet tool that will allow legacy systems to move onto multicore. That seems to be what everyone is hoping for.
I hope some new approaches to the software will come from all this. I am tired of writing Linux drivers.
There's a problem though. As discussed in the Approaching Multicore virtual conference, physics has gotten in the way and put a big speed bump into our comfortable continual increases of performance courtesy of the next semiconductor process node. Current leakage through our tiny transistors is burning us up. Something dramatic has to be done, and additional cores, sometimes identical, sometimes different/specific, are the only feasible way to circumvent the problem.
Absolutely this is not easy, and it dumps a load on the programmer like he hasn't seen before. But propellers will only take aircraft so fast, and at some point you have to realize that the P-51 has its limits. It's time to come into the jet age. And for the time being, the hardware guys have the easy job.
I hope this doesn't stall out our industry's great progress. At least now we've got masses of minds working on making the leap.
Don't get too excited. After having worked through three multiprocessor fads in my software career, I'm not such a believer. Dedicated CPUs have a place for sure, but hardware people seem to always think it's the answer to all their problems. The hitch is that it becomes very costly to write, and _debug_, the complex software that is needed to get the promised efficiency, and usually a larger single core will be more cost effective for bringing a product to market.
Fad 1, 1988 - the end of the mainframe era and the beginning of the workstation era. Multiprocessor architectures were all the buzz but did not take off.
Fad 2, 1994 - Gaming industry, Sega Saturn, 7 processors including two Hitachi SH-2s. It turned out to be more effective to just put one in sleep mode to avoid bus contention (months of s/w engineering time went into reaching that conclusion).
Fad 3, 2008 - PlayStation 3. Sony, Toshiba and IBM develop the Cell microprocessor. Although capable of lots of processing, for games it's no faster than the Xbox 360, because firms don't invest the man-months of s/w engineering to figure out how to spread the problem over multiple cores.
Multicore was a big part of this week's Linley Tech Processor Conference in San Jose, as well. One of the messages we got is that hardware accelerators for security, RegEx, compression, packet processing, and other functions are being integrated on-chip with multiple CPU cores. Speakers from Cavium, LSI, IBM/Power, Applied Micro, and Tilera presented their new multicore offerings, most of which are either MIPS64 or PPC based.