Have we lost sight of power management? The indications are that power consumption has jumped and cooling requirements for new installations are through the roof. There is hope in some new initiatives and designs - but is it enough?
In the race to have the fastest computers, some universities and businesses seem to have lost sight of power consumption and its cost. The University of Buffalo spent millions of dollars to install a supercomputer and elevate its status as a research institution, but it overlooked how much electricity the machine would consume, how much extra heat it would generate, and how much cooling capacity it would need. The school may need to spend an additional $150k just to keep it cool enough - that's 7.5% of the purchase price in cooling upgrades alone! And the school is not alone in looking to upgrade to newer, faster technology. Data centers are in a similar bind, needing more and faster servers to keep up with demand, which means more wattage. Research from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) indicates that power consumption has leaped from 250W in 1992 to nearly 3,800W today, and that, of course, costs more. Imagine what it costs to run something like IBM's BlueGene/L, the fastest supercomputer in the world. This baby was built with low-power chips, takes up less space, and uses less power than previous versions - yet even with those improved, low-power chips it still takes 1.5MW to run.
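To put that 1.5MW in perspective, here is a rough back-of-envelope sketch of the annual electricity bill. The $0.10/kWh rate below is my own assumption for illustration, not a figure from IBM or any of the installations mentioned above, and it ignores the additional power the cooling plant itself draws.

```python
# Back-of-envelope estimate of the yearly electricity cost of a 1.5 MW machine.
# The electricity rate is an assumed figure, used only for illustration.

POWER_MW = 1.5                 # BlueGene/L draw cited above
RATE_PER_KWH = 0.10            # assumed electricity rate, USD per kWh
HOURS_PER_YEAR = 24 * 365

power_kw = POWER_MW * 1000
annual_kwh = power_kw * HOURS_PER_YEAR          # energy consumed in a year
annual_cost = annual_kwh * RATE_PER_KWH         # cost at the assumed rate

print(f"{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f}/year")
# About 13.1 million kWh/year, or roughly $1.3M/year at the assumed rate -
# before counting the extra power needed just to remove all that heat.
```

Even with generous uncertainty in the rate, the electricity bill lands in the same ballpark as the cooling upgrade every single year.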
All this reminds me of the automotive industry in the U.S. back in the '50s and '60s. Bigger and bigger were the engines du jour. Many of my brother's friends built their own supercharged engines, installing four-barrel carbs and much more, just to increase horsepower. But no one worried about how these improvements would affect gas consumption, because gas was cheap. Not so anymore: car manufacturers are again looking for ways to improve gas mileage, and drivers are more aware of the habits that decrease their miles-per-gallon. Not enough on either count, but the trend is there.
The same is true with computers of all kinds - we are finally waking up to the plain fact that we need to improve power consumption, because it costs too much to let the megahertz and gigahertz run wild. Are we doing enough to curb our appetite for excessive, heat-producing electronics? I think we've taken some very important first steps by designing with power management in mind and with more efficient circuits, as well as through initiatives such as the Efficient Power Supplies organization and the California Energy Commission's push to make electronic and electrical equipment more efficient. However, we need to be more diligent to prevent the runaway heat problems that individual computers, server farms, universities, and supercomputer installations create when we look only to increase processing speed, without regard for the heat that comes with it. What else do you think we can or should do?