LONDON: More universities should provide courses on programming for massively parallel computing, and more graphics processor providers should look at enabling the use of the Compute Unified Device Architecture (Cuda) programming language on their devices, according to David Kirk, chief scientist at Nvidia Corp. (Santa Clara, Calif.).
Nvidia developed Cuda to run on its own GPUs and has found considerable traction for the language in applying those GPUs to financial and other multivariate analyses. Indeed, some observers have argued that Cuda is starting to influence academic debates over parallel-processing languages and hardware architectures.
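The article does not show any Cuda code, but the data-parallel style it describes can be illustrated with a standard SAXPY (y = a*x + y) kernel, a common introductory example: every GPU thread computes one array element in parallel. This is a minimal sketch, not Nvidia's or Kirk's own material.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread handles one element: the basic data-parallel pattern Cuda exposes.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the final partial block
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);
    cudaDeviceSynchronize();

    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);  // expect 4.0: 2*1 + 2

    cudaFree(d_x); cudaFree(d_y); free(x); free(y);
    return 0;
}
```

The kernel launch syntax `<<<blocks, threads>>>` is what distinguishes Cuda from plain C: the same scalar-looking function body is executed by roughly a million threads at once, which is the "massively parallel" model Kirk argues scientists should learn.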
"Massively parallel computing is an enormous change, and it will create drastic reductions in time-to-discovery in science because computational experimentation is a third paradigm of research, alongside theory and traditional experimentation. If you can make one of these paradigms much faster, it will have a tremendous impact," Kirk said during a lecture for an invited audience last month at Imperial College London.
Kirk believes massively parallel computing can democratize supercomputing. "For $2,000 per teraflop, anyone can have a single-precision supercomputer in their PC," he said. "Very soon we will have double-precision floating-point designs, and I predict that within less than two years you will see petaflops clusters of double-precision floating-point designs for under $5 million.
"This is really mass-market supercomputing. It takes cost as the main barrier out of computing for science. This is a once-in-a-career opportunity for people in the computing industry."
Kirk concedes there is a lot more to be done. "We need much more research in parallel-computing models," he said. "Cuda is great, but it is not the final answer. We need more research in parallel architectures and hardware, now that we realize that everything has to be parallel."
At the University of Illinois, Kirk teaches a class for scientists and engineers on employing massively parallel processors for general computation. At present, around 100 universities worldwide provide courses based on the Illinois material. "We need to teach massively parallel computing to everybody in science, not just computer scientists and electrical engineers," he said.