An explosion of design options is happening with system-on-chip (SoC) devices for the Internet of Things (IoT), driven largely by power demands. We're on the cusp of new products that use less power, says one engineer, Chris Rowen. He should know. As founder of configurable processor intellectual property (IP) company Tensilica, and now heading up the IP business of Cadence Design Systems, Rowen thinks about power a lot and has done serious time making circuits run more efficiently.
“System architects are responding to the demands of the flow of power and how we fit all the parts together, and you will see products coming out in the next few months that reflect this strategy,” he told EE Times.
Moving 64 bits of data is especially hungry for energy. “There is a huge range in the energy to move 64 bits of data. At the lowest scale there's one transistor; at the other end of the spectrum is the cost in energy for 64 bits of dialogue with a server farm,” Rowen said. “In between, there's disk access, register files and applications processors, and what you find is a scale of over 12 orders of magnitude in energy usage.”
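Rowen's 12-orders-of-magnitude span can be sketched numerically. The energy figures below are illustrative placeholders chosen to match the scale he describes, not numbers from Rowen or Cadence:

```python
import math

# Assumed, illustrative energies to move 64 bits at each level of the
# memory/communication hierarchy, in joules. The specific values are
# rough placeholders, not measured figures.
energy_64bit_joules = {
    "single transistor switch":  1e-15,  # femtojoule scale
    "register file access":      1e-12,
    "on-chip SRAM":              1e-11,
    "off-chip DRAM":             1e-9,
    "disk access":               1e-6,
    "dialogue with server farm": 1e-3,
}

lo = min(energy_64bit_joules.values())
hi = max(energy_64bit_joules.values())
orders = math.log10(hi / lo)
print(f"span: {orders:.0f} orders of magnitude")  # span: 12 orders of magnitude
```

The ratio between the cheapest and most expensive move is what drives the "keep it local" architecture Rowen describes next.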
Because of this, it's natural to keep as much processing as local as possible and go to the cloud only when absolutely necessary, he says. As a result, chip architects are thinking much more carefully about locality of access, and that has a profound effect on what is happening in IoT.
He coins the term "cognitive layering," where an IoT-class device is usually quiescent, waiting for something to happen, such as a sensor or timer triggering. When activity occurs, the device moves from nanowatts of power to perhaps 10-30µW for a basic task such as simple voice-activity detection. It then wakes other processors, at a few milliwatts, for more complex recognition of phrases; then the main application processor for full voice recognition; and finally the link to the cloud, spending the much higher power needed to run that cloud application.
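The wake-up cascade above can be sketched as a simple tiered state machine. The tier names, power figures and trigger conditions here are illustrative assumptions, not a Cadence design:

```python
# Each tier: (name, assumed power in watts, trigger needed to wake the
# NEXT tier). The power numbers loosely follow the article's nW -> µW
# -> mW progression but are otherwise invented for illustration.
TIERS = [
    ("quiescent wait",         10e-9,  "sensor/timer event"),
    ("voice activity detect",  20e-6,  "speech-like energy"),
    ("phrase recognition",      3e-3,  "wake phrase matched"),
    ("full voice recognition", 300e-3, "query understood"),
    ("cloud link",             1.0,    None),
]

def escalate(signals):
    """Climb the tiers, stopping at the first tier whose wake trigger
    has not fired -- the 'almost always off' behavior."""
    active = []
    for name, watts, trigger in TIERS:
        active.append((name, watts))
        if trigger is not None and trigger not in signals:
            break  # don't spend power waking higher tiers
    return active

# Only a timer fired and no speech was heard: the device wakes the
# cheap detector, finds nothing, and never touches the mW-class tiers.
print(escalate({"sensor/timer event"}))
```

The point of the structure is that each tier pays only enough energy to decide whether the next, more expensive tier is worth waking.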
Cognitive layering for power.
“So when we talk about 'always on' what we mean is 'almost always off' – that reflects the fact that you want to push the processing down the stack as far as possible, and at the lowest level not even in processors at all. That's where the opportunities are,” he said.
He points to three key trends. First, designers will apply cognitive layering, even in these low-cost chips, with only just enough processing at each level. Second, power and clock gating of everything, reducing both the switching and the amount of time power is applied to the device. Third, the architectural opportunities: instruction-set optimization to reduce cycle counts, cutting the energy of the interconnect between blocks with special-purpose paths between high-activity areas, and memory partitioning that powers on only enough memory for the application. This system-architecture dimension is critically important, he says.
He points to inertial navigation as one of the classic problems for IoT. While low-cost accelerometer and gyroscope chips do quite a good job of knowing exactly where you are, there are some key challenges. You have six noisy sensors all feeding data, sampling at 100 to 1,000 samples/s, and you need to use the redundant information to filter out the noise using Kalman filtering, which demands high accuracy and heavy computation.
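To show the kind of filtering involved, here is a minimal one-dimensional Kalman filter sketch. Real inertial navigation fuses all six sensor axes with a full state-space model; this toy tracks a single constant value through noise, and the variance parameters are illustrative assumptions:

```python
import random

def kalman_1d(samples, process_var=1e-4, sensor_var=0.5):
    """Scalar Kalman filter: fuse noisy samples into one estimate."""
    est, var = samples[0], 1.0      # initial state and its uncertainty
    for z in samples[1:]:
        var += process_var                # predict: uncertainty grows
        gain = var / (var + sensor_var)   # weight new sample vs. estimate
        est += gain * (z - est)           # correct toward the measurement
        var *= (1.0 - gain)               # uncertainty shrinks after update
    return est

# Simulate a sensor reporting a true value of 1.0 with heavy noise,
# at a data rate comparable to the 100-1,000 sample/s the article cites.
random.seed(0)
noisy = [1.0 + random.gauss(0, 0.7) for _ in range(1000)]
print(kalman_1d(noisy))  # converges near 1.0
```

Each update is only a handful of multiply-adds, which hints at why an instruction set tuned for exactly these operations can collapse the cycle count so dramatically.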
Chris Rowen, Cadence fellow and founder of Tensilica.
Using a configurable processor took a processor load of approximately 1 million cycles per sample down to approximately 1,600 cycles per sample – a reduction of more than 600× – so the power was driven down to tens of microwatts for this high-data-rate operation. This is the kind of sensor DSP application that is at the heart of the cognitive layering approach.
The other key factor is modelling the system more accurately. Cadence has synthesized the Tensilica core and characterized it not at the standard-cell level but with SPICE directly, extracting all the transistors and wires. This gives a more accurate power curve, approaching 1µW/MHz, because the design is no longer limited by the range of the characterization and can operate at 0.8V ±10% while staying comfortably above the threshold voltage, even with sufficient margin built in for manufacturing variability.
All of this leads to a split between the big, general-purpose SoCs pushing the performance and process curve, and the more specialized IoT devices optimized for low cost, low power and very specific functions.
“I think design starts are going to respond to this explosive opportunity,” he told EE Times. “SoC design starts will accelerate for new cognitive layering designs that aren't all bleeding edge, and it will drive partitioning more and more between 'often on' and 'rarely on.' These will have a design cycle of a few months and a team of maybe a dozen people, so you can do it for a million dollars – that's the kind of design automation that's possible without giving up flexibility, and we are working that out in the different areas right now.”