The exascale challenge is a top tech topic at this week's Supercomputing 2010 conference in New Orleans, where members of the DARPA program will discuss their work. A separate panel of experts will address the need for a new model of computing to overcome "a severe barrier of parallelism, power, clock rate, and complexity" that lies ahead for today's multicore processors.
Another group of veteran supercomputer researchers will debate whether designers will be able to hit the exascale goal in 2018. "While not impossible, this will require radical innovations," the panel description said.
"You could run a small town on the power required to run one of these supercomputers and even if you plump for a design and power the thing up, programming it is currently impossible,” said Stephen Jarvis, a professor of computer science at the University of Warwick who will deliver a paper examining the limits of today's best architectures.
Dally of Nvidia was upbeat about his team's approach, which he will detail in an invited talk. "It's still a research problem and there are many unknowns we have to solve, but I am an optimist," Dally said.
The graphics processors used in today's supercomputers consume 200 picojoules per floating-point operation. That's an order of magnitude better than the traditional x86 cores used in the same systems, but still a factor of ten short of the DARPA goals, he said.
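To put those figures in perspective, here is a rough back-of-the-envelope sketch of the power they imply; the sustained exaflop rate is an assumption used purely for illustration, and the resulting megawatt numbers are simple arithmetic, not program specifications.

    # Rough sketch: system power implied by a given energy cost per operation,
    # assuming a machine sustaining one exaflop (1e18 floating-point ops/s).
    SUSTAINED_FLOPS = 1e18  # assumed exaflop rate, for illustration only

    def compute_power_megawatts(picojoules_per_flop: float) -> float:
        """Convert energy per operation into sustained compute power in megawatts."""
        joules_per_flop = picojoules_per_flop * 1e-12
        watts = SUSTAINED_FLOPS * joules_per_flop
        return watts / 1e6

    print(compute_power_megawatts(200))  # ~200 MW at today's GPU efficiency
    print(compute_power_megawatts(20))   # ~20 MW after a factor-of-ten improvement

At 200 picojoules per operation, an exaflop of compute alone would draw roughly 200 megawatts, the small-town scale Jarvis described, which is why the remaining factor of ten matters so much.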
Process technology will get designers part of the way there, but significant advances are still needed in chip architecture, he said. New memory hierarchies, better ways of organizing cores and faster links to DRAM could make up the difference.
"Preliminary [simulation] results are good on some of the concepts we have developed," said Dally. "We are pretty confident we will exceed the program goals," he said.