BARCELONA — Imagination Technologies announced at Mobile World Congress a new version of its PowerVR graphics core for wearables and embedded systems. It also released a suite of video encoders for the HEVC compression standard, enabling 4K video on mobile devices.
The 400MHz, 2.2mm² PowerVR G6020 GPU was built in 28nm technology for low-end mobile devices and high-end wearables. Peter McGuinness, Imagination’s U.S. director of business development, said the two markets have many similarities, including screen resolutions of up to 720p and similar usage models.
“At the lower end, people are much more satisfied with a good looking display and a more restricted use case that implements a more attractive user interface,” he told EE Times.

Imagination cut power consumption in the G6020 by reconfiguring a standard shading cluster designed for high-end gaming to favor more pixels and less shading. The result is a core that requires less memory and processing in a smaller footprint.
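Some quick arithmetic shows why that tradeoff fits the target market. Taking the 400MHz clock and 720p resolution ceiling from the announcement, and assuming a 60 frames-per-second refresh (our figure, not Imagination's), this C sketch works out the cycle budget available per pixel:

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the article: 400MHz clock, screens up to 720p.
         * The 60fps refresh rate is an assumption for illustration. */
        const double clock_hz = 400e6;
        const double pixels_per_frame = 1280.0 * 720.0;
        const double fps = 60.0;

        double fill_rate = pixels_per_frame * fps;        /* pixels/s needed */
        double cycles_per_pixel = clock_hz / fill_rate;   /* cycle budget */

        printf("Required fill rate: %.1f Mpixels/s\n", fill_rate / 1e6);
        printf("Cycle budget per pixel: %.1f cycles\n", cycles_per_pixel);
        return 0;
    }

The output, roughly 55 Mpixels/s and a budget of about 7 cycles per pixel, suggests why a core aimed at user interfaces rather than gaming can get by with a much smaller shader array.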
The GPU supports OpenGL ES 3.0 and maximizes bandwidth efficiency through texture compression. It does not support general-purpose GPU computing, which McGuinness said is not needed in the core’s target products such as enterprise printers, point-of-sale terminals and car dashboard displays.
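As a rough illustration of how compressed textures save bandwidth, the sketch below uploads a pre-compressed texture through standard OpenGL ES 3.0 calls. ETC2 is the format ES 3.0 mandates; the article does not say which scheme the G6020 uses, so the format choice here is an assumption:

    #include <GLES3/gl3.h>
    #include <stddef.h>

    /* Upload a pre-compressed ETC2 texture. Assumes a current ES 3.0
     * context and that `data` holds ETC2-encoded blocks (illustrative;
     * the G6020's actual compression scheme is not named here). */
    GLuint upload_etc2_texture(const void *data, GLsizei w, GLsizei h)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* RGBA8 ETC2/EAC packs each 4x4 pixel block into 16 bytes,
         * so the GPU reads a quarter of the raw RGBA8 footprint. */
        GLsizei size = ((w + 3) / 4) * ((h + 3) / 4) * 16;
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA8_ETC2_EAC,
                               w, h, 0, size, data);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }

Because the texture stays compressed in memory and is decoded on the fly, every fetch moves a fraction of the data an uncompressed texture would, which is where the bandwidth saving comes from.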
“We’ve taken features that are really, really needed at this level and implemented only those features,” McGuinness said, adding that co-processors are also absent from the G6020.
While early wearables used older mobile CPUs and GPUs, current designs are being optimized for lower processing and graphics requirements, Linley Group analyst Mike Demler told EE Times. “[Imagination’s] new GPU provides a lower-power version of the Series 6 Rogue architecture, with a less powerful shader core that meets the needs of lower-resolution displays.”

I'm actually disappointed in the lack of general-purpose compute capability. Workloads that require heavy use of signal processing or machine learning algorithms could benefit tremendously from a more open GPU or parallel processor.
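As a rough sketch of what such a workload looks like, the C program below runs a made-up four-tap FIR filter serially on the CPU. Because each output sample depends only on the inputs, a GPGPU-capable core could compute all of them in parallel, one work-item per sample:

    #include <stdio.h>

    #define NUM_SAMPLES 8
    #define NUM_TAPS    4

    int main(void)
    {
        /* Hypothetical moving-average FIR filter; the signal and
         * coefficients are made up for illustration. */
        float x[NUM_SAMPLES + NUM_TAPS - 1] =
            {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f,
             7.0f, 8.0f, 9.0f, 10.0f, 11.0f};
        float h[NUM_TAPS] = {0.25f, 0.25f, 0.25f, 0.25f};
        float y[NUM_SAMPLES];

        /* Each y[n] reads only the inputs, never another output, so
         * on a GPGPU-capable part this loop maps directly to parallel
         * work-items. Here it runs serially on the CPU. */
        for (int n = 0; n < NUM_SAMPLES; n++) {
            y[n] = 0.0f;
            for (int k = 0; k < NUM_TAPS; k++)
                y[n] += h[k] * x[n + k];
            printf("y[%d] = %.3f\n", n, y[n]);
        }
        return 0;
    }

Signal-processing and machine-learning kernels are largely built from this kind of independent, data-parallel arithmetic, which is why a core that exposes only fixed graphics features leaves that performance on the table.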