BERKELEY, Calif. Researchers gave an update Thursday (Feb. 11) on their work to find new programming models for tomorrow's many-core processors at an annual event at the University of California at Berkeley. In addition, Berkeley announced two new research centers—one focused on low-power circuits and another on cloud computing.
Separately, researchers here called for a new class of power engineers to deal with emerging energy challenges. The field needs cross-trained experts to bring the lessons of the Internet to tomorrow's smart electric grid, they said in a panel discussion at the event.
Kurt Keutzer, a professor working at Berkeley's Parallel Computing Lab, described more than half a dozen applications researchers have written using a new parallel methodology called Pallas. In addition, work has begun on a more general framework for parallelism called Copperhead that could be used by a wider group of programmers, he said.
The Parallel Lab and a similar lab at the University of Illinois were launched in December 2008 with a $20 million grant from Intel and Microsoft to find programming models to harness CPUs packing dozens of cores. "We know we can build such chips, the question is can we program them," Keutzer said.
Keutzer reported results on seven high-performance applications using the Pallas methodology. In the approach, graduate students implement complex algorithms from domain experts. They begin by creating software architectures specific to the algorithms that maximize their use of computational and structural patterns, then map those architectures onto parallel processors.
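The article does not show what such a pattern-based decomposition looks like in code. A minimal sketch of the idea, in plain Python with illustrative names not drawn from the Pallas work itself, might express an algorithm as a composition of two well-known computational patterns, map and reduce, which a later stage could map onto parallel hardware:

```python
# Hypothetical sketch of pattern-based decomposition: the algorithm is
# written as a composition of computational patterns, each of which has
# a known parallel implementation. Names are illustrative only.

from functools import reduce

def map_pattern(f, data):
    # Computational pattern: apply f independently to each element.
    # Every application is independent, so this is trivially parallel.
    return [f(x) for x in data]

def reduce_pattern(op, data, init):
    # Computational pattern: combine elements with an associative op,
    # parallelizable as a tree reduction on a many-core chip.
    return reduce(op, data, init)

# A tiny "per-element work, then aggregate" pipeline.
pixels = [3, 1, 4, 1, 5, 9]
features = map_pattern(lambda p: p * p, pixels)            # independent per-pixel work
total = reduce_pattern(lambda a, b: a + b, features, 0)    # associative combine
```

The point of structuring code this way is that each pattern, not the whole application, is what gets hand-tuned for a given parallel processor.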
Using this approach, one team created a program that reduced the time needed to create an MRI image from one hour to one minute. The code is already being used at a local children's hospital.
In another example, the approach reduced the time to handle object recognition from 222 seconds on an Intel Nehalem processor to 1.8 seconds on a massively parallel Nvidia GTX 280 chip. Other efforts in areas including speech recognition, option trading and machine learning showed results ranging from 11- to 100-fold performance gains.
"But we can't produce at Berkeley enough of these [expert] parallel programmers to create all of tomorrow's applications, so they will also work on creating programming frameworks" less expert programmers can use, Keutzer said.
The Copperhead framework, being co-developed with Nvidia, is focused on generating fast executable code for data-parallel applications. It will work with both Nvidia's CUDA and OpenCL environments.
So far, that work is still in an early stage. "Certainly, a year from now I'll have something to show that hopefully convinces you this is a promising approach," Keutzer said.