In general-purpose computing on graphics processing units, or “GPU compute,” certain computations traditionally handled by a system CPU or application processor are offloaded to the GPU. The addition of programmable pipelines, schedulers and floating-point precision to the graphics rendering pipeline enables GPU-compute technology, but until now a lack of system- and software-level support has hindered its progress. That’s changing with the introduction of APIs and parallel-capable programming languages such as CUDA, DirectX compute, OpenCL, OpenGL Shading Language and Renderscript compute.
Offloading inner parallel loops of programs from the CPU to the GPU can improve performance and save power. Because the GPU can cut power consumption while shaping the look and feel of the display, the responsiveness of games and the user interface, it is arguably becoming more important than the CPU.
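As a rough illustration of what "offloading an inner parallel loop" means in practice, here is a minimal CUDA sketch (not taken from the article; the kernel, data sizes and names are purely illustrative). Each iteration of what would be a CPU loop becomes one GPU thread, while the CPU is left acting as host.

```cuda
// Minimal sketch: offloading a SAXPY-style inner loop to the GPU with CUDA.
// All names and sizes here are illustrative assumptions, not from the article.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One GPU thread handles one iteration of the former CPU inner loop:
// out[i] = a * x[i] + y[i]
__global__ void saxpy(int n, float a, const float *x, const float *y, float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    float *hout = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device (GPU) buffers
    float *dx, *dy, *dout;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMalloc(&dout, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch: the CPU acts only as host; the GPU runs the parallel loop body.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy, dout);
    cudaDeviceSynchronize();

    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", hout[0]);  // expect 5.0

    cudaFree(dx); cudaFree(dy); cudaFree(dout);
    free(hx); free(hy); free(hout);
    return 0;
}
```

The same pattern maps onto OpenCL or Renderscript compute: the host sets up buffers and a kernel, and the GPU's scheduler spreads the loop iterations across its parallel execution units.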
The addition of GPU compute to the GPU's established graphics rendering duties is another step toward reducing the CPU to a housekeeping processor, or host. Applications already being computed on GPUs include the physics of moving objects as part of scene calculation prior to rendering; applications that can benefit from GPU compute include math functions, 2-D and 3-D field solvers, simulators, encryption, sorting and alignment, and some database functions.
A PowerVR Series 5 GPU can compute the physics of the above scene, including carpet movement, as well as rendering the resultant image. Source: Imagination Technologies Group plc
Enablers of the trend include Nvidia, with its graphics chips and CUDA parallel programming platform; the Khronos industry organization, which provides API definitions such as OpenCL and OpenGL; ARM, with its Mali line of GPUs, including versions (the T604 and T658) that have been architected with GPU compute in mind; and Imagination Technologies, with its PowerVR line of GPU cores. — Peter Clarke
@pixies: scary(!) thoughts on extrapolation of networking to higher orders. I have read somewhat on evolutionary and self-organizing networks, but I still consider them dependent on human intervention at several phases, at least for now.
Human beings are already being rendered useless on several fronts by advances in technology. We are supposed to advance in intellectual thought and its application to working life so we can justify the need for human interaction with processes & tools (in short, work!), but that line of argument seems to be struggling for validation in some sectors. More automation is rendering human interaction with machines & tools unwanted. I honestly don't know where this stops!
Junko, I will keep you posted. Had a nice chat with the CTO of Mozilla and also have some presentation materials on B2G.
I understand how Mozilla monetizes Firefox, but I am still in the dark about B2G's monetization model.
Just wanted to draw attention to the latest Smart Back-Up Camera application from CogniVue (also a founding member of EVA): dewarping, object detection & distance estimation running on a single CV2201 processor, a 9x9mm² part including system memory, dissipating ~250mW. How's that for 'powerful, low-cost, energy efficient processors as key enablers of this technology'? Check it out on http://www.youtube.com/user/cognivue/videos
I generally like the list, at least for technology's sake... but I think many are solutions looking for problems. One thing is sure, we are networking the heck out of anything and everything! And losing privacy fast!
Join our online Radio Show on Friday 11th July starting at 2:00pm Eastern, when EETimes editor of all things fun and interesting, Max Maxfield, and embedded systems expert, Jack Ganssle, will debate just what is, and is not, an embedded system.