LONDON – Graphics processor IP licensor Imagination Technologies Group plc is working with Vector Fabrics BV to apply parallelization to software to be distributed across application processors that include PowerVR SGX graphics cores.
The OpenCL-based vfEmbedded code development, analysis and parallelization tool will be demonstrated at the booth of Imagination (Kings Langley, England) at the SIGGRAPH exhibition in Vancouver, August 9 to 11, according to Vector Fabrics (Eindhoven, The Netherlands).
The two companies consider that PowerVR graphics cores can not only render graphics for display but can also use spare resources to deliver additional computation power. The use of OpenCL can provide a boost to certain algorithms and applications, Vector Fabrics said.
The use of vfEmbedded can help application developers increase the performance and lower the power consumption of advanced application processor platforms, Vector Fabrics said.
"Upgrading a sequential program to run efficiently on a parallel GP-GPU architecture is not an easy task. Programmers require intimate application knowledge, must learn new programming models and concepts, and avoid introducing hard-to-find bugs," said Mike Beunder, CEO of Vector Fabrics, in a statement. "Our vfEmbedded tool alleviates the developer from these tasks, greatly assisting the process of modifying sequential code to utilize fast and efficient OpenCL kernels that take maximum advantage of PowerVR GPUs."
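The kind of transformation Beunder describes, turning a sequential loop into a data-parallel OpenCL kernel, can be illustrated with a minimal sketch. This is a generic example (the function name `saxpy_seq` and the kernel are my own illustration, not Vector Fabrics' actual tool output):

```c
/* Sequential form: each iteration reads x[i] and y[i] and writes y[i],
 * with no dependence between iterations, so all iterations could run
 * at once on a GPU. */
void saxpy_seq(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* The equivalent OpenCL-C kernel (illustrative): the loop body becomes
 * the kernel body, and the loop index i is replaced by the work-item id
 * supplied by the OpenCL runtime.
 *
 *   __kernel void saxpy(const float a,
 *                       __global const float *x,
 *                       __global float *y) {
 *       int i = get_global_id(0);
 *       y[i] = a * x[i] + y[i];
 *   }
 */
```

The hard part a tool has to automate is not this mechanical rewrite but proving that the iterations really are independent; hand-done, that analysis is where the "hard-to-find bugs" come from.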
Tony King-Smith, vice president of marketing at Imagination, said: "We believe vfEmbedded is a great example of a new wave of tools now becoming available to developers to drive the next wave of GPGPU-based algorithm development."
Hmm, I was under the impression that these GPUs were inherently parallel, and so were the software libraries they used. It appears not, so are they facing the same challenge as general-purpose software?
I thought that GPUs were heavily pipelined to boost performance on repetitive operations over blocks of data. Maybe I misunderstood the GPU concept early on? Anyway, code designed to run sequentially over data is hard to convert to parallel code unless the data operations are repetitive and independent of neighboring datums. When I worked on an array processor (when the earth was still flat) a lot of effort went into library routines that optimized the FFT code for the available hardware (two ALUs, one multiplier). This provided great speed, but at the cost of specialization. I wonder how the vfEmbedded tool works?
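The commenter's point about iterations being "independent of neighboring datums" is the crux of parallelizability, and a small sketch (my own illustration, not anything from the article) makes it concrete:

```c
/* Independent iterations: y[i] depends only on x[i], so the loop can be
 * split across any number of GPU work-items with no coordination. */
void square_all(int n, const int *x, int *y) {
    for (int i = 0; i < n; i++)
        y[i] = x[i] * x[i];
}

/* Loop-carried dependence: each output needs the previous accumulated
 * value, so a naive per-element parallelization gives wrong answers.
 * A parallelization tool must either detect this dependence and refuse,
 * or restructure the loop (e.g. into a parallel prefix scan). */
void prefix_sum(int n, const int *x, int *y) {
    int acc = 0;
    for (int i = 0; i < n; i++) {
        acc += x[i];
        y[i] = acc;
    }
}
```

Presumably a tool like vfEmbedded earns its keep by performing exactly this kind of dependence analysis automatically.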
Most likely this piece of news means that Imagination is outsourcing the OpenCL "compiler", or some other link in the chain from "OpenCL code" to "PowerVR assembly", to this Vector Fabrics company.
Of course, when you talk about OpenCL software you may not be talking about sequential programming at all :), so that statement doesn't make a lot of sense anyway.
And yeah, GPUs are heavily parallel architectures. I'm not familiar with PowerVR's architecture at all, but since AFAIK it targets the embedded space, we may suspect that it has more fixed-function units than your average GPU (since FF would then give an energy-consumption advantage). Hence the added challenge for efficient OpenCL support.