SEATTLE, Wash. — SC11 — Nvidia Corp., Cray Inc., The Portland Group, Inc. (PGI) and CAPS have unveiled ‘OpenACC’, a directives-based programming initiative the firms hope will become an industry standard for parallel computing.
The approach aims to facilitate the acceleration of applications on both CPUs and GPUs without modifying the underlying code, making it easier for scientists and researchers to use parallel programming efficiently on heterogeneous CPU/GPU systems.
OpenACC allows parallel programmers to give simple hints, or “directives,” to the compiler that identify which parts of the code to accelerate, without changing the underlying code itself. The compiler then uses the directives to map the computation onto the accelerator.
Initially, existing compilers from Cray, PGI and CAPS will support OpenACC, which is compatible with Nvidia’s CUDA parallel programming architecture as well as being multi-platform and multi-vendor. AMD, Nvidia’s competitor in the GPU space and a provider of high-powered CPUs, is not part of the initial OpenACC initiative, though Nvidia says it hopes other firms will be quick to adopt it as a standard.
Nvidia believes OpenACC will most benefit the scientific community, especially chemistry, biology, physics, data analytics, intelligence and climate researchers who may not have enough funding or computational expertise to port their code over to GPU architectures.
“We’re so confident about the approach, we’re launching an initiative called ‘2x in 4 weeks’,” said Nvidia’s Sumit Gupta, noting that developers using OpenACC had reported 2x to 10x increases in application performance in as little as two weeks when using existing directive-based compilers.
“People are sometimes afraid to try GPUs, but once they try them out, they realize it’s much easier than they initially expected,” he added.
The initiative has already garnered support from some of the bigger names in supercomputing, including Oak Ridge National Laboratory, which hopes OpenACC will be useful in the continued build-out and deployment of its Titan GPU-accelerated supercomputer, expected to become the world’s fastest.
“Our ultimate goal is to have all Titan supercomputing code run on hybrid CPU/GPU nodes, and OpenACC will enable programmers to develop portable applications that maximize the performance and power efficiency benefits of this architecture,” said Buddy Bland, the Titan project’s director.
The initiative’s openness, flexibility and portability across multiple platforms were deemed especially important by Jeffrey Vetter, joint professor in the Computational Science and Engineering School of the College of Computing at the Georgia Institute of Technology, who said OpenACC represented “a major development for the scientific community.”
While some may indeed find it useful, others will argue the initiative is too narrowly focused and backed to really compete with OpenCL, the broad market alternative to CUDA for general purpose parallel processing on GPUs. Developers interested in trying OpenACC out for themselves are being offered a one-month free trial of the PGI Accelerator Fortran and C compilers from the Nvidia website, or directly from Cray.
Without full support for all the GPUs/CPUs out there, I find it hard to see this effort succeeding. It looks like an attempt to push forward their own platforms with an "open" standard. It would be most interesting if OpenCL started to provide a similar set of parallel computing code indicators; then we could see how things progress.
Commingling the serial and parallel code is not an efficient use of programmer time. Look to much _higher_ level constructs like Aparapi or Clyther. Pat Hanrahan has done awesome work in domain-specific languages targeting GPGPU. High-level tools should target the OpenCL runtime, but programmers should, for the most part, not operate at that level.
"Nvidia says it hopes other firms will be quick to adopt it as a standard"
Yup. Like how Nvidia put so much effort into supporting OpenCL.
It is obvious that folks who have huge computation requirements will support anything that will lead to cheaper computation for them.
What is also obvious to someone who has done programming in this space is that widespread use of this kind of computing depends on *open* tool chains where getting screwed by a single vendor is not going to happen ("been there, done that")
How is this different from OpenCL, which is an open standard being worked on and maintained by Khronos?
If this OpenACC is a competitor to OpenCL, then I hope and pray that OpenACC dies a quick and very, very painful death.
Is OpenACC an open-source initiative, or does it just use the word "Open" in its name?
If it is open source, it will be adopted by the entire industry; otherwise it will not be easy for other companies to adopt.