The pace of change in IC manufacturing is becoming so fast that ideas and techniques may struggle to last without significant re-invention. What next for big-little?
Rather than the two-core big-little idea, ARM may need to work on a finer-grained approach: a whole nest of ISA-compatible cores optimized for different levels of power saving and performance. Could it have to evolve into a "biggest-big-little-tiny" strategy?
But what about the real estate? Is a complex hierarchy of tuned ISA-compatible cores, all waiting to perform at a particular point in the power-performance curve, too expensive?
Maybe, maybe not. We should also remember that big-little is partly a response to the ARM idea of "dark silicon," which holds that, for power-consumption and thermal reasons, advanced processors cannot afford to have all their silicon in operation at the same time, because the chip would simply burn up. So, the argument runs, you may as well use portions of the IC optimized for different loads and use cases. The counter-argument is that rather than design a complex IC that has to be predominantly dark, one would rather have a simpler chip that is easier to design and less costly to make.
One alternative is to find a manufacturing process that can support a wider range of dynamic voltage and frequency scaling (DVFS).
In fact such a process already exists in the form of the 28-nm fully-depleted silicon-on-insulator (FDSOI) manufacturing process from STMicroelectronics. It can take operation down to around 0.6 V, compared with bulk CMOS, which is generally limited to a minimum of about 0.9 V. Back-biasing of the FDSOI wafer provides a broad dynamic scaling of performance, which should be a good fit for the big-little architecture.
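To see why a lower voltage floor matters so much, recall that dynamic CMOS power scales roughly with the square of supply voltage. A minimal back-of-the-envelope sketch, using illustrative numbers rather than measured figures for either process:

```python
# Sketch: why a wider DVFS range pays off.
# Dynamic CMOS power scales roughly as P = alpha * C * V^2 * f.
# All values are illustrative, not measured data for bulk CMOS or FDSOI.

def dynamic_power(volts, freq_ghz, c_eff=1.0, alpha=1.0):
    """Relative dynamic power, P proportional to alpha * C * V^2 * f."""
    return alpha * c_eff * volts ** 2 * freq_ghz

# Bulk CMOS voltage floor (~0.9 V) vs FDSOI floor (~0.6 V), same clock.
p_bulk = dynamic_power(volts=0.9, freq_ghz=1.0)
p_fdsoi = dynamic_power(volts=0.6, freq_ghz=1.0)

print(f"relative dynamic-power saving at the floor: {1 - p_fdsoi / p_bulk:.0%}")
# (0.6/0.9)^2 is about 0.44, i.e. roughly a 56% dynamic-power saving
```

The square law is why dropping from 0.9 V to 0.6 V cuts dynamic power by more than half at the same frequency, before any frequency scaling is even applied.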
In fact ST-Ericsson has already used 28-nm FDSOI to market the idea of a "quad-core" processor, the L8580 ModAp, based on two physical Cortex-A9 cores. ST-Ericsson's argument is that the Cortex-A9 can be operated at low voltage to save dynamic power, like a "little" core, or pushed to high clock frequency for performance, like a "big" core: a virtual big-little.
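The virtual big-little idea amounts to exposing one physical core as two operating points and switching between them with the load. A hypothetical sketch of such a governor; the thresholds and voltage/frequency pairs are invented for illustration and are not ST-Ericsson's actual figures:

```python
# Hypothetical "virtual big-little" governor: one physical core, two
# operating points, selected per utilization. All numbers are illustrative.

OPERATING_POINTS = {
    "little": {"volts": 0.6, "freq_ghz": 0.8},  # low-voltage, power-saving mode
    "big":    {"volts": 1.0, "freq_ghz": 2.0},  # high-frequency performance mode
}

def pick_mode(load):
    """Select an operating point from core utilization (0.0 to 1.0)."""
    return "big" if load > 0.75 else "little"

for load in (0.10, 0.50, 0.90):
    mode = pick_mode(load)
    op = OPERATING_POINTS[mode]
    print(f"load {load:.0%}: run as '{mode}' at {op['volts']} V / {op['freq_ghz']} GHz")
```

The point of the sketch is that, unlike physical big-little, no task migration between cores is needed; the same silicon simply changes its operating point.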
I don't much like the idea of measuring a chip by the number of virtual cores, which seems to be a trend right now, because of course that number is arbitrary. I prefer to count the cores in three-dimensional physical space.
But nonetheless the intelligent combination of time-slicing, extended DVFS, and multiple physical cores is likely to be one way forward for big-little, in a multidimensional extension of what we already have.