Nvidia proposes GPU programming standard
11/14/2011 12:22 PM EST
SEATTLE, Wash. — SC11 — Nvidia Corp., Cray Inc., The Portland Group, Inc. (PGI) and CAPS have unveiled ‘OpenACC’, a directives-based programming initiative the firms hope will become an industry standard for parallel computing.
The approach aims to facilitate the acceleration of applications on both CPUs and GPUs without modifying the underlying code, making it easier for scientists and researchers to efficiently use parallel programming on heterogeneous CPU/GPU systems.
OpenACC allows parallel programmers to give simple hints, or "directives," to the compiler that identify which parts of the code to accelerate, without changing the underlying code itself. The compiler then uses those directives to map the computation onto the accelerator.
Initially, existing compilers from Cray, PGI and CAPS will support OpenACC, which is compatible with Nvidia's CUDA parallel programming architecture and is designed to be multi-platform and multi-vendor. AMD, Nvidia's competitor in the GPU space and itself a provider of high-powered CPUs, is not part of the initial OpenACC initiative, though Nvidia says it hopes other firms will be quick to adopt it as a standard.
Nvidia believes OpenACC will most benefit the scientific community, especially chemistry, biology, physics, data analytics, intelligence and climate researchers who may not have enough funding or computational expertise to port their code over to GPU architectures.
“We’re so confident about the approach, we’re launching an initiative called ‘2x in 4 weeks’,” said Nvidia’s Sumit Gupta, noting that developers using OpenACC had reported 2x to 10x increases in application performance in as little as two weeks when using existing directive-based compilers.
“People are sometimes afraid to try GPUs, but once they try them out, they realize it’s much easier than they initially expected,” he added.
The initiative has already garnered support from some of the bigger names in supercomputing, including Oak Ridge National Laboratory, which hopes OpenACC will be useful in the continued build-out and deployment of its Titan GPU-accelerated supercomputer, expected to become the world's fastest.
“Our ultimate goal is to have all Titan supercomputing code run on hybrid CPU/GPU nodes, and OpenACC will enable programmers to develop portable applications that maximize the performance and power efficiency benefits of this architecture,” said Buddy Bland, the Titan project’s director.
The initiative's openness, flexibility and portability across multiple platforms were deemed especially important by Jeffrey Vetter, joint professor in the Computational Science and Engineering School of the College of Computing at the Georgia Institute of Technology, who said OpenACC represented "a major development for the scientific community."
While some may indeed find it useful, others will argue the initiative is too narrowly focused, and backed by too few vendors, to really compete with OpenCL, the broad-market alternative to CUDA for general-purpose parallel processing on GPUs.
Developers interested in trying OpenACC out for themselves are being offered a one-month free trial of the PGI Accelerator Fortran and C compilers from the Nvidia website, or directly from Cray.