SANTA CLARA, Calif. – The electronics industry must keep improving energy efficiency, but it is also important to take a system-level view of the issues, according to Professor Jonathon Koomey, recently appointed research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University.
Koomey provided a second-day keynote at the ARM TechCon conference and exhibition here that looked at how to measure the efficiency of electronic systems and then went on to show, by way of examples, how applications based on distributed sensing and processing could benefit society. One lesson from Professor Koomey: the energy-saving benefits usually come from moving bits rather than atoms.
Professor Koomey started by flattering his audience, saying that their presence at ARM TechCon showed they clearly understood that the major issue going forward is how to build low-power or self-powered systems.
He also gave them the good news that, to date, miniaturization has yielded not only higher-performance transistors but also ones that consume less energy to switch.
He referenced his own work highlighting long-term trends in the energy efficiency of computing and of mobile phones. In computing, energy efficiency has doubled every 1.5 years throughout the PC era, and the trend line extends back into the vacuum-tube era. "Since the 1940s there has been a 100x improvement every decade," he said. However, he acknowledged that these plots do not take into account the important issue of system power consumption when idle.
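As a quick arithmetic check (ours, not from the talk), a doubling every 1.5 years does indeed compound to roughly the 100x-per-decade figure Koomey quoted:

```python
# Doubling every 1.5 years, compounded over a 10-year span:
growth_per_decade = 2 ** (10 / 1.5)
print(f"{growth_per_decade:.0f}x per decade")  # prints "102x per decade"
```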
In most systems, idle power consumption and the speed of transitions into and out of active mode are more significant than active-mode power consumption, he said. "Of course you want to reduce both," he said.
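To see why idle behavior can dominate, consider a toy duty-cycle model of a mostly-idle device; the numbers below are made up purely for illustration:

```python
# Toy energy model: for a mostly-idle device, average power is dominated
# by the idle floor, not the active power. All numbers are illustrative.
P_ACTIVE_W = 2.0    # power while working
P_IDLE_W = 0.2      # power while idle
DUTY_CYCLE = 0.05   # fraction of time spent active

def avg_power(p_active, p_idle, duty):
    return duty * p_active + (1 - duty) * p_idle

print(f"baseline:          {avg_power(P_ACTIVE_W, P_IDLE_W, DUTY_CYCLE):.3f} W")      # 0.290 W
print(f"halve active power: {avg_power(P_ACTIVE_W / 2, P_IDLE_W, DUTY_CYCLE):.3f} W")  # 0.240 W
print(f"halve idle power:   {avg_power(P_ACTIVE_W, P_IDLE_W / 2, DUTY_CYCLE):.3f} W")  # 0.195 W
```

In this sketch, halving the idle power saves more energy than halving the active power, which is the point Koomey was making.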
Professor Jonathon Koomey engages with a packed audience at ARM TechCon.
Professor Koomey also referenced work in progress on improving energy efficiency in cell phones. As with computing, choosing what to measure is important. He said he has an assistant plotting the efficiency of 500 mobile phones going back to a Motorola unit from 1984, the metric being talk time per watt-hour. The statistics so far show an improvement of about 8 percent per year, although of course modern cell phones do much more than enable voice calls.
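For illustration (with hypothetical handset numbers, not Koomey's data), the metric and its compounding work out as follows:

```python
# The metric: talk time delivered per watt-hour of battery capacity.
def talk_time_per_wh(talk_time_hours, battery_wh):
    return talk_time_hours / battery_wh

# Hypothetical handset, for illustration only: 10 hours of talk on a 5.5 Wh battery.
print(f"{talk_time_per_wh(10.0, 5.5):.2f} hours per watt-hour")  # 1.82

# An 8-percent-per-year improvement compounds to roughly 9x over the
# ~28 years between the 1984 Motorola unit and this talk:
print(f"{1.08 ** 28:.1f}x")  # 8.6x
```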
The biggest problem today is heat extraction from the inner layers. Once the heat is brought to the surface, removing it is no different than for today's chips. And since total system dissipation is significantly reduced compared with the 2D case, the overall cooling requirements for the same design are reduced as well.
The physics of heat removal requires that your conduction path not heat up, so a power line is not the way. You need something like air conditioning, where heat is extracted from the conduction path itself by circulating a cooling fluid, most simply air. But the power consumed by the cooling must then be taken into account.
Good point @Zeev00, Feynman predicted that a long time ago ("There's Plenty of Room at the Bottom"). Again using the biological analogy: our brain is a 3D device... but we need to reduce power before going 3D, else there is no way to solve the heat-extraction problem... reducing computational accuracy, or moving to analog computing (as the brain does), would be useful to accomplish that... Kris
The only path to systematically reduce power seems to be to go 3D. I am not talking about the TSV 3D path, but the monolithic 3D one. With monolithic 3D integration the interconnects are shorter, and hence so are their capacitance and power; the off-chip drivers and their large power are gone; and heterogeneous integration eliminates yet more high-power chip-crossing signalling.
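To make the interconnect argument concrete, here is a sketch assuming the standard dynamic-power relation P ≈ a·C·V²·f; the wire capacitances below are made-up values, not measurements:

```python
# Dynamic power of a driven wire scales with its capacitance:
# P ≈ activity * C * V^2 * f. Shorter monolithic-3D interconnects mean
# lower C and hence lower switching power. Numbers are illustrative.
def dynamic_power(activity, cap_farads, volts, freq_hz):
    return activity * cap_farads * volts**2 * freq_hz

p_2d = dynamic_power(0.1, 2e-13, 0.9, 1e9)  # 200 fF wire in a 2D floorplan
p_3d = dynamic_power(0.1, 8e-14, 0.9, 1e9)  # 80 fF wire after 3D folding
print(f"2D wire: {p_2d * 1e6:.1f} uW, 3D wire: {p_3d * 1e6:.1f} uW")  # 16.2 vs 6.5
```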
"Approximate data can reduce the computing load significantly. We have to start looking at the way we do computing from a system point of view." - I think is the key takeaway from that talk...our computing is too exact, we calculate everything with 32 or 64 bit accuracy...no need...look at our brains, work much better (pattern recognition for example) despite being much slower