Processor intellectual property licensor ARM Holdings plc has gained two additional licensees for its big-little approach to multiprocessing, bringing the total number of partners to 16, all of which are trying to use the technology to win designs in mobile applications.
The number was revealed by CEO Warren East in a presentation to financial analysts to provide background to ARM's fourth quarter financial results. East said he is not concerned that the number might represent too many chip vendors competing in an intensely competitive sector.
In the analysts' conference East said ARM is seeing its big-little technology deployed in smartphones and tablets but not yet in other market segments. He also explained that where an application has a more consistent load or computation profile, big-little may not be beneficial.
East explained that big-little – where a power-optimized processor core is paired with a performance-optimized core as part of a dynamic voltage and frequency scaling regime – produces the best results where there is a large range of computational loads, something that is typical of smartphones and tablets. He added that in time he expected the technology to "trickle down" to entry-level smartphones as the technology becomes lower cost.
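The switching behavior East describes can be sketched as a simple load-driven governor: work runs on the power-optimized core at low load and migrates to the performance core under heavy load. This is a toy illustration, not ARM's actual switcher logic; the threshold values are invented for the example.

```python
# Hedged sketch of a big-little "core switcher": pick a cluster based on
# recent CPU load. Thresholds and behavior are illustrative only.

LITTLE, BIG = "Cortex-A7", "Cortex-A15"   # power- vs performance-optimized
UP_THRESHOLD = 0.85    # migrate to the big core above this load
DOWN_THRESHOLD = 0.30  # fall back to the little core below this load

def choose_core(current: str, load: float) -> str:
    """Hysteresis keeps us from bouncing between cores on noisy load."""
    if current == LITTLE and load > UP_THRESHOLD:
        return BIG
    if current == BIG and load < DOWN_THRESHOLD:
        return LITTLE
    return current

# A bursty smartphone-style load profile: mostly idle, occasional spikes.
core = LITTLE
for load in [0.1, 0.2, 0.9, 0.95, 0.5, 0.4, 0.2, 0.1]:
    core = choose_core(core, load)
    print(f"load={load:.2f} -> {core}")
```

The hysteresis gap between the two thresholds is what makes this pay off for the bursty loads East mentions: short spikes run fast on the big core, while long idle stretches stay cheap on the little one.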
"We will see silicon this year. It's not going to be massive volume. This year will be the start of big-little," East told EE Times.
However, East denied that 16 licensees were too many in the mobile sector. "We never try to second guess who the winners and losers are," he said. "In most of the spaces where ARM is present we have from a handful to about twenty licensees. So 16 is pretty well penetrated. But in the mobile space even since 2010 we have seen new companies coming through."
ARM's big-little architecture is not good enough in the long run because of decreased area efficiency on chip.
As time progresses, more than 8 cores will be needed on chip, with increased capacity to run more threads per chip.
So how does the big-little approach really help once we get to very many cores, probably within this decade?
Remember ARM cores are tiny compared to, say, x86 cores (especially Atom). The little core is much smaller than the big core and can also have a smaller L2 as it targets lower performance. So the overhead is pretty small. Given that a SoC contains a lot more than just CPUs, total die area overhead is likely less than 10%.
In principle you could match 8/16 big cores with fewer little cores. However that many cores isn't useful for tablets and mobiles, so I don't see your issue.
Mobile being the future of general-purpose (public) computing, which will displace the PC (IMO, this decade), more and more ARM cores are needed from a multi-threading perspective, and Android/iOS/WinCE ... will get big and complicated. Hence more than a few ARM-like cores are needed for responsive use of mobile devices.
- OS: 2 cores
- App(s): 2 cores
- Others when needed (video, audio, ...): 2-4 cores
- Some math might need: 2-4 cores
Hence a total of 8 or more cores will be needed, probably by 2020, IMO.
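Taking the commenter's own (hypothetical) per-subsystem budget at face value, the low and high ends of the tally sum like this:

```python
# Summing the commenter's hypothetical per-subsystem core budget
# as (low, high) ranges. The numbers are their estimates, not data.
budget = {
    "OS": (2, 2),
    "App(s)": (2, 2),
    "Others (video, audio, ...)": (2, 4),
    "Math": (2, 4),
}
low = sum(lo for lo, hi in budget.values())
high = sum(hi for lo, hi in budget.values())
print(f"{low}-{high} cores")  # -> 8-12 cores
```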
Hence designing the A15 big relative to the A9 means future ARM cores will be much bigger in size, unless foundry nodes shrink enough to counterweigh the increase in core design size.
The big-little design approach is in serious trouble on a roughly 10-year time frame, IMO.
For now, customers are expected to use mobile devices more than PCs, with more usage time per day.
One A7 core is only about 0.5% of a chip; 4 cores is 2%. That's an almost irrelevant size. For 2% extra area you get a device that is TWICE as efficient overall, while still allowing much higher performance than anything on the mobile market.
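The back-of-envelope numbers above can be checked directly; note the 0.5%-per-A7 figure is the commenter's estimate, not a measured die statistic:

```python
# Sanity check of the area-vs-efficiency claim in the comment above.
# The per-core area share is the commenter's estimate, not measured data.
a7_area_pct = 0.5                 # one Cortex-A7 as % of total die
cluster_pct = 4 * a7_area_pct     # a 4-core little cluster
claimed_efficiency_gain = 2.0     # claimed: device becomes ~2x as efficient

print(f"4 little cores cost {cluster_pct:.0f}% of the die")
print(f"claimed return: {claimed_efficiency_gain:.0f}x efficiency "
      f"for {cluster_pct:.0f}% extra area")
```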
I don't agree... given the tiny size of the cores (particularly the "extra" cores like the A7), that's not so much space. CPU cores are often tiny anyway, compared to memory and GPU on a typical SoC.
That said, I do wonder if software won't also evolve. The big/LITTLE idea is that processes seamlessly get scheduled from big to LITTLE as the load changes... but if you put in 6-8 big cores, does it still make sense to keep the 1:1 mapping? Of course, ARM is making this easy on the SW folks today.
nVidia did something more like what I'm talking about in their 4+1 design on the Tegra 3. This seems to be a dramatic improvement, at least for standby vs. full operation. My previous Android tablet would run down overnight on standby. The Transformer can sit for days and still be at near-full charge, yet instant-on. There's probably some power-savings evolution in the OS too, but I definitely believe in the concept, having seen it. I think big/LITTLE intends to make this a full-time thing, not just for standby.
But that's not how real SMP systems work... you don't lock problems to cores; tasks are allocated as needed based on priority. And ideally, if there's no work to do, you can shut down some cores, even now... or, as in the article, shunt the work to much lower-power CPUs.
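The point that an SMP scheduler assigns tasks by priority rather than pinning subsystems to fixed cores can be sketched roughly. This is a toy model, nothing like the real Linux scheduler; task costs and priorities are made up for the example.

```python
# Toy SMP model: tasks are taken in priority order and placed on the
# least-loaded core; cores that end up with no work can power down.
# Not real scheduler code; purely illustrative.

def schedule(tasks, n_cores):
    """tasks: list of (priority, cost); higher priority is placed first."""
    cores = [0.0] * n_cores                        # accumulated load per core
    for priority, cost in sorted(tasks, reverse=True):
        idx = min(range(n_cores), key=cores.__getitem__)
        cores[idx] += cost                         # least-loaded core takes it
    return cores

loads = schedule([(3, 1.0), (2, 0.5), (1, 0.2)], n_cores=4)
busy = sum(1 for c in loads if c > 0)
idle = loads.count(0.0)
print(f"busy cores: {busy}, idle (can power down): {idle}")
```

With only three runnable tasks, one of the four cores stays idle and could be gated off, which is exactly the "shut down some cores" case the comment describes.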
I'm not actually sure there's a NEED for more cores. I don't see Apple moving beyond two cores, at least not until they have more true SMP in their OS. Applications will eventually demand more computing power, but as with Intel, there are different ways to meet that demand, multi-core being just one of them.
You're right that mobiles and tablets will become the PCs of the future. But if you look at today's PCs, most are dual-core and a few are quad-core. Are there any 8-core PCs? No. So why do you expect 8-core mobiles?
I'd say that big/little becomes more beneficial over time as cores become faster, larger and more power hungry.
nVidia's 4+1 idea is actually fairly similar to big/little, but uses a low-power process rather than a smaller core. At least in Tegra 3 there isn't much saving unless you need very low performance (active standby). However, the two approaches could be combined.