In general-purpose computing on graphics processing units, or “GPU compute,” certain computations traditionally handled by a system CPU or application processor are offloaded to the GPU. The addition of programmable pipelines, schedulers and floating-point precision to the graphics rendering pipeline makes GPU compute possible, but until now a lack of system- and software-level support has hindered its progress. That is changing with the introduction of APIs and parallel-capable programming languages such as CUDA, Microsoft’s DirectCompute, OpenCL, the OpenGL Shading Language and Renderscript compute.
Offloading the inner parallel loops of a program from the CPU to the GPU can improve performance and save power. Because the GPU can cut power consumption while shaping the look and feel of the display, the responsiveness of games and the behavior of the user interface, it is arguably becoming more important than the CPU.
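As a hypothetical illustration of what “offloading an inner parallel loop” means in practice, the sketch below (plain Python, no GPU required; all names are invented for the example) shows the restructuring that GPU-compute APIs such as OpenCL and CUDA ask for: the body of the loop becomes a “kernel” invoked once per data element, with no dependence between iterations, so a GPU runtime could schedule all invocations in parallel.

```python
def saxpy_kernel(i, a, x, y, out):
    # On a GPU, one work-item (thread) would execute this body
    # for its own index i; iterations are fully independent.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # A GPU runtime would dispatch all n invocations in parallel;
    # this sequential loop only mimics the execution model.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

The same pattern, written in OpenCL C or CUDA, is what lets the compiler and driver map each index to a hardware thread on the GPU.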
The addition of GPU compute to the GPU’s established graphics rendering duties is another step toward reducing the CPU to a housekeeping processor, or host. Applications already being computed on GPUs include the physics of moving objects, calculated as part of the scene prior to rendering; applications that could benefit from GPU compute include math functions, 2-D and 3-D field solvers, simulators, encryption, sorting and alignment, and some database functions.
A PowerVR Series 5 GPU can compute the physics of the above scene, including carpet movement, as well as rendering the resultant image. Source: Imagination Technologies Group plc
Enablers of the trend include Nvidia, with its graphics chips and CUDA parallel programming platform; the Khronos industry organization, which provides API definitions such as OpenCL and OpenGL; ARM, with its Mali line of GPUs, including versions (the T604 and T658) that have been architected with GPU compute in mind; and Imagination Technologies, with its PowerVR line of GPU cores.
— Peter Clarke