The coming 20 nm and 16 nm FinFET processes will shake up the whole SoC and IP supply chain, says Tony King-Smith, executive vice president of marketing at Imagination Technologies, speaking at the International Electronics Forum in Dublin.
Imagination sees the GPU as the process driver, which is natural given its PowerVR 3D graphics heritage, but it also sees the GPU, rather than the processor core, as the scalable multi-core compute engine of choice for the SoC. With many repeated elements that support redundancy, the GPU is less vulnerable to the process variability that will dominate SoC design at 20 nm and 16 nm for the next few years.
But for Imagination the supply chain and ecosystem extend much further than just the silicon, or even the embedded software. This emphasis on the GPU as the scalable element also drives changes all the way up to the user interface. Hence the announcement this week that Imagination is teaming up with Rightware on user interface design and benchmarking.
But the need for close collaboration on the SoC goes all the way up the chain, through the UI software to portal software such as Imagination's Flow, which runs in the cloud. Somewhat counterintuitively, all of this feeds back into the SoC design.

He points to the increasing split in the mobile market between ARM and other processor architectures — all running different versions of Google’s open source Android operating system. Intel is having some success with its x86-based Atom processors in mobile handsets and tablets, while Imagination is combining its MIPS cores with its PowerVR graphics and video technology and its programmable radio front end. Coupled with the move to technologies such as ray tracing, this creates a widely varying set of requirements, he says.
This will mean apps have to run across different CPUs in the same way they already do across GPUs and radios, he says, breaking the dependency on the CPU instruction set architecture (ISA). One way to do this is to use the higher levels of the ecosystem: instead of downloading an app built for a particular ISA, the user downloads a generic discovery app that investigates what hardware resources are available.
Once the discovery app determines the hardware available, it downloads different optimized blocks for the different hardware elements, creating the optimal software. However, this is not simple to do, and more R&D is needed that links the cloud, the UI, apps and the SoC.
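The discovery-then-dispatch pattern described above can be sketched roughly as follows. This is purely an illustrative assumption of how such a discovery app might be structured; the table of blocks, the probe logic, and all names here are hypothetical, not a real Imagination API.

```python
# Hypothetical sketch of the "discovery app" pattern: probe the device,
# then select the binary block optimized for that hardware combination.
import platform

# Map of (cpu_arch, gpu_family) -> optimized component to fetch.
# In a real system these would be precompiled binaries served from the cloud.
OPTIMIZED_BLOCKS = {
    ("arm", "powervr"): "app-arm-powervr.bin",
    ("x86", "powervr"): "app-x86-powervr.bin",
    ("mips", "powervr"): "app-mips-powervr.bin",
}
FALLBACK_BLOCK = "app-generic.bin"


def probe_hardware():
    """Discover the CPU architecture; GPU detection is stubbed out here."""
    machine = platform.machine().lower()
    if machine.startswith(("arm", "aarch")):
        cpu = "arm"
    elif machine.startswith("mips"):
        cpu = "mips"
    else:
        cpu = "x86"
    gpu = "powervr"  # assumption: a real probe would query the GPU driver
    return cpu, gpu


def select_block(cpu, gpu):
    """Choose the optimized block for this hardware, or a generic fallback."""
    return OPTIMIZED_BLOCKS.get((cpu, gpu), FALLBACK_BLOCK)


if __name__ == "__main__":
    print(select_block(*probe_hardware()))
```

The hard part, as the article notes, is not the dispatch table but producing and validating an optimized block for every hardware combination, which is where the extra R&D linking the cloud, the UI, apps and the SoC comes in.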
Putting all this together with the right balance of memory and performance determines the power consumption and performance of the system. “This is a very key area,” he says. The attention to detail makes a huge difference, with four to five times the performance difference using exactly the same set of IP.
“This is the most exciting decade I have seen for computing,” said King-Smith. I’m inclined to agree.

You wrote:
Once the discovery app determines the hardware available, it downloads different optimized blocks for the different hardware elements, creating the optimal software.
Wow, if this can really be done, that is awesome!
I remember back in the old days when PC video game developers wrote "to the metal," which made it difficult to port games from one graphics chip platform to another. In those days, literally everybody talked about the need for an open API for graphics. (Well, I am talking about the days before OpenGL became accepted by virtually everyone...)
But in the world of mobile, as you pointed out, with all the fragmentation going on with Android and different CPUs, not to mention multi-cores, it appears that things have gotten even more complicated.
This is a very interesting space to watch.