San Jose, Calif. -- This may go down as the year the electronics industry woke up to the full breadth and significance of the trend to multicore processors.
Next year promises some stepwise advances, including the advent of the first multicore benchmarks and application programming interfaces. But in time it may also be remembered as the year the industry realized what designers of multicore software and interconnects already know: Many years of hard work lie ahead.
Intel Corp. and archrival Advanced Micro Devices Inc. helped popularize the trend to multicore processors. They grabbed headlines repeatedly in 2006, leapfrogging each other in their race to field two- and four-core CPUs for mainstream computers.
Meanwhile, the much quieter embedded world pushed the boundaries with processors that in some cases packed more than 200--even 500--cores on a die. Further stretching the limits, cell phone chip makers like Texas Instruments routinely shipped processors with a variety of cores. Even low-cost electronic toys got into the act, as LSI Logic Corp. rolled out an ASIC platform using three or four cores of different types for consumer gadgets.
The number of cores on a die replaced megahertz as the new metric for microprocessors, as silicon engineers shifted their focus to increasing performance while keeping a lid on power and heat. All the activity made it clear that a new day had arrived, and that CPUs and software will need to respond.
"You can buy multicore laptops, desktops, cell phones, PDAs, multicore anything--even my 7-year-old is using a multicore system," said Anant Agarwal, a longtime pioneer in parallel computing and professor of electrical engineering and computer science at the Massachusetts Institute of Technology. "Because of this widespread adoption, software developers have finally come to grips with the fact that programming as they knew it is not going to be the same any more--they will be doing parallel programming going forward," Agarwal said.
"In fact, one person sent me an e-mail asking if single-threaded Unix applications are doomed," he added.
John Goodacre, the program manager for multiprocessing at ARM Ltd., agreed. "[In 2006] I've seen the market move from positions of confusion and fear over multicore to acceptance and adoption across a broad range of market segments," said Goodacre, whose company has so far licensed its ARM11 MPCore multicore processor to more than 10 chip makers.
The hardware move is driving a mind shift in software, Goodacre said. "Generally, there is a view that multicore programming is hard and that existing code investments cannot utilize the multicore solution. I expect 2007 to be the year that the software community's current hesitancy toward multicore is dispelled," he said.
The big problems
Now that the industry is awake to the issue, someone better put on a mega pot of joe. Major unsolved problems in computer science lie dead ahead.
"We lack algorithms, languages, compilers and expertise" in parallel computing, said Jim Larus who manages programming and tools work at Microsoft Research.
Asked to identify the big problems to solve, Larus said the short term poses "a lot of practical issues, like developing better support for multithreading, synchronization, debugging and error detection."
"Longer term," he said, "we need a better understanding of what people want to do with parallel programming to learn how to write code across different kinds of parallel machines."
Larus, a professor of parallel computing at the University of Wisconsin for eight years before joining Microsoft, is hiring a new research team to focus exclusively on parallel languages and compilers. His group has already worked on tools to find errors in parallel code. It also developed a program called Accelerator to turn x86 code using highly parallel data sets into programs that run on graphics processors sporting multiple cores.
"The reason [software is] lagging behind [multicore chips] is we don't know what people want to do with these parallel machines yet," Larus said. "A lot of today's applications are not very parallel."
Many companies have legacy code they want to transition to multicore, but don't want to go through the pain of rewriting those applications. That reality has spawned a lot of talk about automated tools. Companies like CodePlay Software Ltd. (Edinburgh) are developing compilers that aim to make serial code parallel.
But that avenue may be a dead end, Larus said. Researchers spent years in the '80s pursuing such tools for legacy Fortran code. "Everybody agreed it was not the right way to go," he said. "You need to write a program in a parallel language in the first place. The work has to go down to the algorithm level, so what we really need are parallel programming languages."
But researchers have yet to identify fundamental, general-purpose techniques for parallel programming. MSR's Accelerator project, for example, worked well for applications built on parallel data structures such as matrices. For other applications, researchers are studying "futures": constructs that wrap an operation so that other threads can retrieve its result once the operation completes.
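The futures idea can be sketched in a few lines of Python, chosen here purely for illustration (the article names no language); the `word_count` task is a hypothetical stand-in for real work handed off between threads:

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    # Stand-in for an expensive operation another thread can await.
    return len(text.split())

# submit() returns a future immediately; the work runs on a pool thread.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(word_count, t) for t in ("one two", "a b c")]
    # result() blocks only until that particular operation completes.
    counts = [f.result() for f in futures]

print(counts)  # [2, 3]
```

The caller never manages locks or joins by hand; the future itself is the synchronization point, which is why the construct appeals for applications that don't fit the data-parallel mold.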
Another promising technique is called lightweight software transactions. Borrowing from database processing, it lets programmers mark a high-level block of work that the system must complete as if without interruption. The method replaces the old, crude and often error-prone approach of using low-level locks to keep parallel processes from interfering with each other, said Larus, who wrote a book on the new method.
"There is a lot of enthusiasm for implementing this, but there are still questions about whether it is a good idea and how it interacts with the rest of the software model," Larus said.
The bigger picture is that no one parallel programming technique or set of approaches is broadly applicable, researchers have found. "Each construct has its own uses and none are universal," Larus said. So "we think the area of concurrency is going to be a hot research area for a number of years."
The good news is that a small group of vendors banded together in January 2006 to form the Multicore Association (MCA). The group, with about 15 members now and 10 more in process, has set up three working groups, one of which is getting close to a spec for a communications application programming interface (CAPI).
The group aims to borrow work done on distributed systems to craft a silicon-level approach to message-passing geared toward multicore chips. It originally hoped it could finish a spec by year's end but is now aiming for the first few months of 2007.
"Multicore programming is a big challenge, and API standards [like CAPI] are a key to addressing this problem," said Agarwal of MIT.
Separately, the MCA is studying whether it can define for programmers a spec that addresses the problems of debugging in a multicore chip without repeating work in existing debug specs. "We don't want to reinvent the wheel," said Markus Levy, president of MCA.
The association also has a work group trying to hammer out a spec for interprocess communications that are independent of operating systems and processors. Initially the group chose the open-source transparent interprocess communications, or TIPC, backed by Wind River Systems. Enea Embedded Technology protested the move after it rolled out Linx, a rival IPC that Enea will offer on an open-source basis.
Levy tried to distance the MCA from the issue, saying much of the work on the IPC technologies will be in the open-source community outside the MCA.
Separately, the Embedded Microprocessor Benchmark Consortium (EMBC) has started work on a standard suite of tests to measure performance of multicore processors. The ad hoc industry group expects the benchmarks could be available in six to eight months.
Industry leaders have been calling for new ways to measure microprocessors that now routinely use multiple cores.
The EMBC effort will initially focus on homogeneous multicore architectures that use multiple instances of the same core. Future efforts may look at processors that use a variety of cores.
The benchmarks will focus on two areas: digital media and voice over Internet Protocol. They may appear as one or multiple suites of tests that will let users plug in their own custom or third-party tests of multithreaded software.
"We've been discussing this for more than a year and just recently started development on it. It's probably a six- to eight-month project," said Levy, who also serves as president of EMBC.
See related image