SAN JOSE, Calif. – The Khronos Group announced version 1.2 of OpenCL, its language and APIs for parallel programming on graphics chips and other processors. Separately, Altera said it is testing a prototype tool to let designers program its FPGAs with OpenCL.
The OpenCL working group "is getting bigger and broader," said Neil Trevett, who chairs the group. "The FPGA support is one example that OpenCL is not just for high-end supercomputers but for consumer and embedded systems and the Web community, too," said Trevett, who also chairs the Khronos Group, which oversees OpenCL, OpenGL and other standards.
The expansion comes at a time when the concept of hybrid computing is sweeping the industry from supercomputers to smartphones. These so-called heterogeneous architectures use a mix of CPU, GPU and other cores.
Version 1.2 of OpenCL is backward compatible with the previous version, 1.1, released 18 months ago. Trevett said the cadence is "about the right frequency" for OpenCL releases.
The new version includes a laundry list of new features, many of them enhancing its interoperability with the Khronos Group's OpenGL standard and Microsoft's DirectX APIs for graphics programming.
One of the most broadly useful parts of the upgrade is a virtualization-like capability for partitioning a device such as a GPU into separate blocks and assigning specific jobs to the resulting sub-devices. The feature lets programmers improve quality of service by dedicating some resources to high-priority or latency-sensitive jobs.
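The partitioning described above is exposed through the `clCreateSubDevices` call added in OpenCL 1.2. Below is a minimal sketch of how a host program might split a device into equal-sized sub-devices; it assumes an OpenCL 1.2 platform and driver are installed, and error handling is kept to a minimum.

```c
/* Sketch: partitioning an OpenCL 1.2 device with clCreateSubDevices.
 * Assumes an OpenCL 1.2 runtime and at least one device are present. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    /* Ask the runtime to split the device into sub-devices of
     * four compute units each; the list ends with 0. */
    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_EQUALLY, 4, 0
    };
    cl_device_id sub_devices[8];
    cl_uint num_sub = 0;
    cl_int err = clCreateSubDevices(device, props, 8,
                                    sub_devices, &num_sub);
    if (err == CL_SUCCESS)
        printf("created %u sub-devices\n", num_sub);

    /* Each sub-device can now get its own context and command queue,
     * e.g. one reserved for latency-sensitive work. */
    return 0;
}
```

The partitioning mode is a driver-level hint: a device may also be split `CL_DEVICE_PARTITION_BY_COUNTS` or `CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN`, depending on what the hardware supports.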
With version 1.2, programmers can also invoke so-called built-in kernels, treating fixed-function or vendor-specific blocks in a design as if they were OpenCL kernels, even though they were not written in OpenCL. The feature provides access to blocks with limited or no programmability, previously out of reach of OpenCL users.
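In API terms, this works through `clCreateProgramWithBuiltInKernels`, also new in OpenCL 1.2. A rough sketch follows; the kernel name "vendor_video_scaler" is invented for illustration, since real names are whatever the vendor reports through the `CL_DEVICE_BUILT_IN_KERNELS` device query. A valid context and device are assumed to exist already.

```c
/* Sketch: wrapping a fixed-function block as an OpenCL 1.2 built-in
 * kernel. "vendor_video_scaler" is a hypothetical name; actual names
 * come from the CL_DEVICE_BUILT_IN_KERNELS device query. */
#include <stdio.h>
#include <CL/cl.h>

cl_kernel get_builtin_kernel(cl_context context, cl_device_id device)
{
    cl_int err;

    /* Query which built-in kernels, if any, the device exposes. */
    char names[1024] = "";
    clGetDeviceInfo(device, CL_DEVICE_BUILT_IN_KERNELS,
                    sizeof names, names, NULL);
    printf("built-in kernels: %s\n", names);

    /* Wrap the fixed-function block as a program; no source
     * compilation is involved. */
    cl_program prog = clCreateProgramWithBuiltInKernels(
        context, 1, &device, "vendor_video_scaler", &err);

    /* The resulting kernel is enqueued like any other. */
    return clCreateKernel(prog, "vendor_video_scaler", &err);
}
```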
Khronos has split off from the OpenCL working group at least two task forces that will tackle long-term goals for the standard. The groups will set goals within the next six months for when and how they will release their work.
One group is trying to define ways to layer on top of OpenCL higher-level languages that insulate programmers from the sometimes complex details of parallel programming. Nvidia's CUDA environment for parallel programming on its GPUs is already considered higher level and thus easier for programmers inexperienced with parallelism, said Trevett, who also serves as vice president of mobile content at Nvidia. Microsoft is also working on high-level parallel programming languages, he said.
A separate OpenCL task force is working to define a standard intermediate representation of OpenCL code. Such a representation would help video game programmers and other parallelism experts ship software that is both more secure and closer to actual hardware resources.