Computer science is approaching a crisis that some CS experts say could fuel a renaissance of ideas.
Very soon, mainstream computers will need an easy-to-use parallel programming model to tap into performance gains from next-generation multicore processors. But that will require one or more breakthroughs, because top researchers worked unsuccessfully for more than a decade to develop such a model for high-end supercomputers.
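Today's models hint at what "easy to use" might mean. As a minimal illustration (not any specific vendor's approach), a data-parallel task with no shared state can be spread across cores with Python's standard multiprocessing pool; the hard research problem the article describes is making this kind of simplicity work for general, irregular software.

```python
from multiprocessing import Pool

def square(x):
    # An independent work item: no shared state, so it parallelizes trivially.
    return x * x

if __name__ == "__main__":
    # Four worker processes; a real program would match the core count.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Embarrassingly parallel loops like this one are the easy case; the crisis concerns everything else.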
In a sign that the issue is now on the radar screen of mainstream computing, Intel and Microsoft last week officially announced their plan to spend a total of $20 million over five years to fund work at new parallel computing labs at the University of California at Berkeley and the University of Illinois at Urbana-Champaign.
The news comes as a related group of researchers has nearly finished the design of an FPGA-based system intended to serve as a vehicle for exploring ideas in parallel computing.
Other companies and government research agencies need to attack the programming issue, which leaders at the new Berkeley lab are calling the biggest problem in the history of computer science.
"This architectural shift will change chips, systems and software, so it has the potential to radically change the industry," said Kurt Keutzer, a professor of computer science at Berkeley working in the new lab. "This shift will change how we write software for everything."
Charles Thacker, a computer pioneer and technical fellow, oversees the small systems architecture team at Microsoft Research that is helping to design the research computer. "What I am hoping is things like [the FPGA-based system] will revitalize research in computer architecture," he said.
A handful of companies are stepping up to the challenge in various ways. Advanced Micro Devices, Hewlett-Packard, IBM and Nvidia are all gearing up new parallel computing initiatives.
One of AMD's chief technologists is now working full-time to drive a parallel computing initiative. HP started a multifaceted effort in its labs about a year ago.
Nvidia is rolling out chips and tools for running processing-intensive vertical applications in areas such as oil and gas exploration on its massively parallel graphics chips. And IBM has started a "cloud computing" initiative that includes work on parallel programming issues.
Others are still unaware of the looming crisis, or are ignoring it. "I still don't see as much concern about this in the applications community, including sectors like CAD and EDA," said Keutzer, who was chief technology officer for Synopsys before joining Berkeley.
Among federal research agencies, the National Science Foundation and the Defense Advanced Research Projects Agency have let past efforts in parallelism for high-end systems lie fallow, said Kathy Yelick, another professor of computer science at Berkeley working in the lab.
"It's impressive [that Intel and Microsoft] have a concerted effort. Now it's a question of whether federally funded agencies are putting enough attention here," Yelick said.