Most likely this piece of news means that Imagination is outsourcing the OpenCL compiler, or some other link in the chain from "OpenCL code" to "PowerVR assembly", to this Vector Fabrics company.
Of course, when you talk about OpenCL software you can hardly speak about sequential programming at all :), so that statement doesn't make much sense anyway.
And yeah, GPUs are heavily parallel architectures. I'm not familiar with PowerVR's architecture at all, but since AFAIK it targets the embedded space, we may suspect it has more fixed-function units than your average GPU (since fixed function gives an energy-consumption advantage there). Hence the added challenge for efficient OpenCL support.
I thought that GPUs were heavily pipelined to boost performance on repetitive operations over blocks of data. Maybe I misunderstood the GPU concept early on? Anyway, code designed to run sequentially over data is hard to convert to parallel code unless the operations are repetitive and independent of neighboring data elements. When I worked on an array processor (when the earth was still flat), a lot of effort went into library routines that optimized the FFT code for the available hardware (2 ALUs, 1 multiplier). This provided great speed, but at the cost of specialization. I wonder how the vfEmbedded tool works?
Hmm, I was under the impression that these GPUs were inherently parallel, and so were the software libraries they used. It appears that's not entirely the case; are they facing the same challenge as general-purpose software?