Hidden behind the recent good news in the semiconductor industry is a developing crisis that I fear will spread across the entire electronics industry within a decade.
Moore’s Law is not really a law at all; as Dr. Moore himself regularly reminds us, it is merely a social contract between the semiconductor industry and its customers to keep technology moving forward at an exponential rate. There is no intrinsic reason why Moore’s Law must continue. In that case, why do we abide by it? Because whatever the financial costs of keeping up with Moore’s Law, the social costs of not doing so would be far greater.
For 50 years, ICs have been powering the world’s products and services, thereby driving the world economy. Moore’s Law doesn’t just describe the pace of innovation in the semiconductor world, but that of life in the modern world. What’s the price for this extraordinary pace of change? It may surprise you to learn that the answer is right in front of us.
The standard definition of Moore’s Law is that computer chips double in density every two years, but in fact it is much more complex. The law has several dimensions that can be described as different axes of change:
• Size (density). The surface area needed for a given number of transistors shrinks year after year.
• Performance. The same-sized chip will get ever more powerful in terms of memory storage, computation speeds, etc.
• Price. As chips get smaller, the price per transistor falls; rearchitect a design to exploit the inherent speed gains of smaller transistors, and the price can fall even faster. Nevertheless, increasing performance has been the main focus for decades.
Over the past 20 years, the emphasis has been on increasing chip density while keeping the chip size relatively constant and maximizing performance and integration. More recently, the process of making more-complex individual processors with ever smaller transistors became prohibitively expensive. Faced with that new reality, the big processor companies shifted to multicore designs; it worked, but the cost was to abandon one of the three dimensions—size—probably forever.
The good news is that Moore’s Law is still intact. We can maintain the pace, but at a much greater risk of breakdown.
Or are we?
Let me suggest there is in fact a fourth dimension hidden in Moore’s Law that could keep it as strong as ever and may even extend its lifespan: efficiency. Efficiency explains how, in just 50 years, we could progress from building-sized corporate mainframes, requiring their own power grids and refrigeration, to laptops capable of even greater performance.
Several decades ago, efficiency wasn’t particularly interesting. We told ourselves that the average chip consumed just a few watts, and even as consumption gradually climbed to 100 to 200 W per chip, that seemed minuscule compared with light bulbs and home appliances.
But by allowing our industry to become one of the biggest energy sinks on the planet, we have violated the social contract of Moore’s Law. This is unconscionable. We are in the semiconductor business to make the world a better place, not worse; to drive progress, but not at the environment’s expense.
Simply put, we have to initiate change and rethink Moore’s Law to include the long-ignored fourth dimension of efficiency. If we could take 20 percent of total semiconductor R&D and drive down total chip power consumption by 15 percent per year, we could have the average device running on just 20 percent of today’s electrical consumption by 2020. A personal computer would then draw just 40 W, not the 200 W it draws today. That might sound challenging, but consider that today’s smartphones already consume less than 5 W for a full day of use.
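The arithmetic behind that projection is easy to check. A minimal sketch, assuming the 15 percent annual reduction compounds over the roughly ten years from the article's writing to 2020:

```python
# Back-of-the-envelope check of the article's projection
# (assumption: ten years of compounding, ca. 2010 to 2020).
annual_reduction = 0.15   # drive total chip power down 15% per year
years = 10

remaining = (1 - annual_reduction) ** years
print(f"Power remaining after {years} years: {remaining:.1%}")

pc_today_w = 200          # the article's 200 W personal computer
pc_2020_w = pc_today_w * remaining
print(f"A {pc_today_w} W PC would then draw about {pc_2020_w:.0f} W")
```

Compounding 0.85 over ten years leaves about 19.7 percent of today's consumption, which is the article's "just 20 percent" figure; 20 percent of a 200 W machine is the 40 W cited, and the reciprocal of 20 percent gives the fivefold headroom claimed in the next paragraph.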
Replace every chip in the world with these new low-power devices, and the global market would be able to carry five times as many electronic products as it does today—without adding any extra burden to the power grid.
What would we lose by diverting so much investment from performance and price into efficiency? Almost nothing. The next generation of chips might take 18 months instead of 15; your next laptop might run at 2.4 GHz instead of 2.8 GHz; and the 30 percent price cut on the latest generation of iPod might happen next February instead of this November. Are you willing to make that small sacrifice? I think we all are.
What we need now is a new social contract, ratified by the entire chip industry, that agrees we will maintain the total energy consumption of the world’s semiconductor devices at the level it is today. Sound difficult? It is. But if you read about Moore’s Law in 1965 and were told that it would still be setting the pace in 2010, you would have thought it impossible.
The semiconductor industry is all about doing the impossible. Now is the moment for this generation of chip makers to take up the next challenge.

Sehat Sutardja is chairman, president, and CEO of Marvell Technology Group.
To explain energy to non-scientists, you could try Cambridge University physics professor David MacKay, whose free PDF ebook is very readable and easy to grasp (www.withouthotair.com).
My house uses 13-20 kWh daily; the computing baseload is about 300 W (roughly 8 kWh/day), so ditching the server and desktops and going exclusively online would save 30-50% of our electricity bill.
Natch, I can't run those CAD and EDA tools on an iPhone, but a laptop is fine and doesn't run all the time.
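The commenter's figures are easy to sanity-check. A quick sketch starting from the stated 300 W baseload (note that 300 W works out to about 7.2 kWh/day, close to the 8 kWh quoted):

```python
# Rough check of the commenter's household figures
# (all input values taken from the comment itself).
baseload_w = 300                      # always-on server + desktops
daily_kwh = baseload_w * 24 / 1000    # watts to kWh per day
house_low, house_high = 13.0, 20.0    # household usage range, kWh/day

share_low = daily_kwh / house_high    # share on a heavy-usage day
share_high = daily_kwh / house_low    # share on a light-usage day
print(f"Computing baseload: {daily_kwh:.1f} kWh/day, "
      f"{share_low:.0%}-{share_high:.0%} of household usage")
```

That gives 36 to 55 percent of the household total, broadly consistent with the 30-50% savings claimed.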
BTW it's funny in a wry way to see the fat boys finally noticing and indeed courting the Cinderella of efficiency, after decades of ignoring all common-sense protest in the name of market forces...
Could frugality ever be fashionable?
Quote: "Simply put, we have to initiate change and rethink Moore’s Law to include the long-ignored fourth dimension of efficiency."
AMD championed multi-core processing over 10 years ago, driven by power density issues, among other things. Intel reached the same barrier, as power and energy densities went higher than electric lamps, skillets, and rocket nozzles. While performance is still the main metric used to judge processors, reaching even higher performance was achievable only by spreading out the power consumption across the die, and capping the power density. Ultimately, this is MIPS per Watt.
ARM architecture is clearly showing high efficiency, but Intel architecture remains the choice for high performance. More performance means more functionality. People do low power only once their performance needs have been met, or when there is a physical or monetary constraint. In low-cost applications, ARM is already proving this.
The USERS of computing devices choose if they want performance or low power or low cost. If Marvell can deliver the best combination of all three, the world will beat a path to their door -- or so the saying goes...
Let's use Super DST (Daylight Saving Time) to turn off all the stadium lighting. Set our clocks ahead by twelve hours. We work at night, when office buildings use less HVAC energy and the same lighting energy as during the day. All sports would then be played during the day, when no stadium lighting would be required. As a bonus, we would be in the same time zone as China, making the semiconductor business even more efficient. A true win-win-win.
It will take the government to make it happen - where can that go wrong? :)
I wonder what the end limit is of Moore's so-called fourth dimension, power efficiency, given that using the semiconducting ability of these materials as a switch is fundamentally an energy-consuming process. Perhaps we will need to switch to other technical solutions where we can control circuits with cold electronics. No?
@prbhakar_deosthali, I agree, there are lots of other ways of saving energy... but we need a mechanism to have them implemented, and incentives to do so... how exactly would you shut down night sports activities? Government regulations? I think the only real mechanism is energy cost: let energy be more expensive and people will make some behavioral modifications... as for now energy is too cheap; my electricity bill is $40 a month, too little to bother saving! Kris
I think more energy can be saved by building environment-friendly buildings that use more natural light and ventilation instead of air-conditioning plants and artificial lighting. More energy can be saved by reducing night sports activities, where millions of units are consumed by flood-lit football, rugby, and cricket stadiums, and by reducing the working hours of those heavily lit shopping malls. Compared with all these energy guzzlers, I think the energy consumed by electronics is a minuscule part of the world's energy consumption.
Moore's law survives because it benefits consumers while also benefiting producers through economies of scale. Efficiency, defined in this article as power consumption, is not always a perceived benefit to consumers. It works in the context of mobile devices because it correlates directly with functional use. But for wall-powered devices there is no perceived benefit, because power is cheap and there is no convenience premium to be had from a lower-power solution. While noble, until market forces shift toward reducing the power of the plug-in computer, this isn't going anywhere. Why would someone pay for an efficiency factor that does not benefit them?
An understanding of energy versus its rate of transfer/consumption is fundamental to properly characterizing the World's energy problems and engineering the right solutions. Without the right scientific/engineering foundation, we will keep making mistakes that waste our resources and create new problems rather than solve existing ones.
I think you can correlate Moore's law with other things that also increase exponentially, such as Earth's population, monetary debt, the number of biological species (yes, evolution is exponential), communications and the internet, probably some things in economics; actually, entropy itself is increasing exponentially, probably because of a law of the universe or something far greater than human society anyway. We can barely understand things like global warming (which by the way doesn't exist; it's just sun activity), let alone control it. I agree with the OP in some ways, but I think we cannot control it; it's just the way nature works.