SEATTLE--SUPERCOMPUTING 2011--Disruptive technologies like the GPU are important steps on the path to exascale computing, said Nvidia Corp.'s CEO Jen-Hsun Huang in a keynote at SC11 on Tuesday (Nov. 15).
With supercomputing already an essential tool in modern science, Huang said the industry’s work in the space was “vitally important to society and the advancement of culture,” but that reaching exascale was something of an innovator’s dilemma.
The dilemma, theorized by last year's SC keynote speaker Clayton Christensen, holds that disruptive technologies are not initially deemed valuable by the markets they eventually serve, and therefore face an uphill battle for acceptance. GPUs, argued Huang, are just such a disruptive technology, with their roots as graphics engines invented specifically for teenage gamers.
As the market grew, however, GPUs found utility first in workstations and recently as accelerators in some of the world’s fastest supercomputers.
While accelerating systems is helpful, Huang said the industry needed to use GPUs to push even further into the future, especially as power constraints became pressing.
“Supercomputing is now power limited, just like the notebook, the tablet and the cell phone,” said Huang, asking what would happen if supercomputing centers capped their power usage at 20 MW. By Huang’s estimates, such a cap would mean being unable to achieve exascale before 2035, though Intel Corp. disagrees with that prediction.
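As a back-of-the-envelope check (my arithmetic, not a figure from the keynote), a 20 MW cap makes the required efficiency for an exascale machine easy to derive:

```python
# Efficiency needed to reach exascale (1e18 FLOPS) under a 20 MW power cap.
exaflops = 1e18          # floating-point operations per second at exascale
power_cap_watts = 20e6   # 20 MW power budget

required_efficiency = exaflops / power_cap_watts  # FLOPS per watt
print(f"{required_efficiency / 1e9:.0f} GFLOPS/W")  # → 50 GFLOPS/W
```

Fifty GFLOPS per watt is well beyond what either CPUs or GPUs delivered in 2011, which is why Huang framed the power budget, not peak performance, as the binding constraint.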
Huang posited that the way to solve power efficiency issues while continuing to improve on performance was indeed through the GPU, which he called “much less complex” than the CPU.
CPUs, said Huang, waste inordinate amounts of energy scheduling instructions and moving data across the chip, while GPUs are simpler, with minimal overhead. While it is true that graphics processors are not optimized for single-threaded performance, they do offer IEEE floating-point compatibility, leading many researchers to wish they could describe all their problems as triangles, joked Huang.
“Energy efficiency is job one. The benefits of getting there are really quite wonderful,” he said, adding, “We think in power envelopes now. It is the most important characteristic.”
Nvidia also unveiled its new Maximus system, which pairs a Quadro graphics card with a Tesla compute card in a single workstation. This allows users to handle interactive graphics and compute-intensive number crunching for simulations simultaneously on the same system.
Previously, users had had to either use separate systems or carry out the steps consecutively.
The Maximus system automatically assigns work to the right processor, be that the Quadro GPU or the Tesla C2075, said Huang.
NVIDIA Maximus-enabled applications include products from Adobe, ANSYS, Autodesk, Bunkspeed, Dassault Systèmes and MathWorks, and systems are available as of today from HP, Dell, Lenovo and Fujitsu.
Huang did not spend any time during his keynote discussing the recent announcement from the Barcelona Supercomputing Center that it will build a supercomputer using Nvidia’s ARM-based Tegra chips, nor did the CEO mention the next-generation Kepler GPU.
Nvidia has a compute model to push. Of course they'll claim it's better than the alternatives, but is there any reason we have to keep seeing these claims parroted in the media without support?
The GPU model is inherently data-parallel. It's ideal for certain kinds of computation, but clearly not all. The question is how much independent computation a thread performs before it needs to interact with other threads, or to operate on general-purpose memory. If threads run basically in lockstep and operate on their own state, you will be happy with GPUs. If not -- if your dataflow is more complicated, your computation more conditional, or your problem so large it needs many machines -- then GPUs simply won't suit, and you'll use MPI. Whether the MPI runs on BG/Q, the K machine or Intel MIC doesn't matter that much.
There is no future in which GPUs totally win, since what defines them is their restrictive data/compute model. An interesting question is whether such a restrictive model is necessary to achieve power efficiency (and MIC is precisely Intel's bet against that proposition).