Rising power demand and costs require strategies and tactics to reduce consumption and promote environmental responsibility
Data-center electricity consumption is now approximately 2.5 percent of total energy use in the United States, according to a 2009 report from Lawrence Berkeley National Laboratory, and it will continue to climb rapidly as the mobile Internet, cloud computing, and other technological trends mature. As of 2009, data-center energy consumption was rising at about 12 percent per year, according to a 2009 VMware report, and total power costs in the US alone are now close to $3.4 billion annually. As a result, strategies to reduce power consumption, manage capacity, and promote environmental responsibility are critical objectives.
These strategies are vital as the number of servers in data centers grows by approximately 10 percent annually, according to a 2009 McKinsey and Company report. New generations of servers are complex and potentially power-hungry. A typical server now contains a large number of DC/DC regulators, including 5- or 6-phase regulators for the CPU Vcore; together, these deliver up to 150 A peak at 1 V, or 150 watts per CPU. Memory rails can dissipate an additional 25 to 120 watts. Dissipation on the other rails is more modest, at a few hundred milliwatts to 5 watts each, but the total adds up fast.
The proliferation of servers, from corporate and IP service providers to embedded applications such as wireless base-station network controllers or routers, requires new, highly efficient power-management techniques. One solution is powering off excess servers in the data center, which delivers immediate energy-cost savings: fewer running systems translate directly into less power and reduced operating costs. A native workload on an entry-level server with low utilization will consume 50 W of energy, costing around $600 annually, whereas a virtual-machine workload on a server hosting 16 virtual machines uses only a fraction of that power: 5 W, costing around $45 annually.
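The arithmetic behind those per-workload figures can be sketched as follows. The electricity rate and the overhead multiplier are illustrative assumptions, not values from the article:

```python
def annual_energy_cost(watts, rate_per_kwh=0.10, overhead=1.0):
    """Annual cost of a load that runs continuously.

    rate_per_kwh and overhead (a cooling/power-distribution
    multiplier, analogous in spirit to a PUE factor) are
    illustrative assumptions, not measured data-center values.
    """
    kwh_per_year = watts * 24 * 365 / 1000.0  # 50 W -> 438 kWh/year
    return kwh_per_year * rate_per_kwh * overhead

# Raw utility cost of a continuous 50 W load at $0.10/kWh:
print(round(annual_energy_cost(50), 2))  # -> 43.8
```

Note that the raw utility cost of a 50 W load is far below the article's fully burdened ~$600 figure, which presumably folds in cooling, distribution losses, and other amortized infrastructure costs beyond the electricity bill itself.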
Server virtualization can help by reducing the amount of hardware in use as loading decreases. Another way to reduce consumption is to increase the light-load efficiency of the power chain, from AC-to-DC conversion to the point of load (POL). A typical server spends much of its time at load points where efficiency is low; similarly, a typical personal computer operates at relatively low utilization much of the time. Virtualization software improves efficiency by maximizing utilization across a server farm, ensuring that each active server operates at peak MIPS rates.
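As a minimal sketch of the consolidation idea, the following first-fit-decreasing bin-packing heuristic packs low-utilization workloads onto as few hosts as possible so the remainder can be powered down. It is a stand-in for a real virtualization scheduler, which would also weigh memory, I/O, and failover headroom:

```python
def consolidate(workloads, host_capacity=1.0):
    """First-fit-decreasing bin packing: place each workload
    (expressed as a fraction of one host's capacity) on the first
    active host with room, opening a new host only when necessary.
    Returns the number of hosts that must stay powered on.
    """
    hosts = []  # remaining capacity of each active host
    for w in sorted(workloads, reverse=True):
        for i, free in enumerate(hosts):
            if w <= free:
                hosts[i] -= w
                break
        else:
            hosts.append(host_capacity - w)  # power on a new host
    return len(hosts)

# Sixteen 5%-utilization workloads consolidate onto a single host:
print(consolidate([0.05] * 16))  # -> 1
```

Running sixteen such workloads on one host instead of sixteen is exactly the 16-VM scenario described above: fifteen servers can be powered off entirely.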
Efficiency can also be improved by implementing digital core controllers of the kind that companies such as Intersil are developing and enhancing. This power-management technology was originally developed for mobile and computing applications.
Consider improvements in light-load efficiency. Here, the CPU core regulator can achieve over 90 percent efficiency from full load, which can exceed 100 amps, down to light loads near 1 amp: two orders of magnitude. In the data center, this type of high-power load exists for core CPUs and dense memory in servers, and also for the custom ASICs that handle network data traffic.
To help control dissipation, companies such as Intersil are developing new multiphase and point-of-load architectures to improve server DC/DC efficiency. Multiphase regulators such as Intersil's 6-phase VR12 regulator are designed specifically to improve efficiency under light-load conditions. Newly developed algorithms such as automatic phase dropping, diode-emulation mode, and gate-voltage-over-threshold control can improve efficiency by as much as 20 percent at 10 percent load, and by even more as utilization drops. Efficiency can be maintained over two orders of magnitude, from a few amps to nearly 100 amps.
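A rough sketch of the phase-dropping decision: each phase is most efficient near some current, so the controller activates only as many phases as the load requires, falling back to a single phase in diode-emulation mode at very light load. The numbers below (a 20 A per-phase sweet spot and a 2 A diode-emulation threshold) are illustrative assumptions, not VR12 parameters:

```python
import math

def active_phases(load_amps, total_phases=6, per_phase_sweet_spot=20.0,
                  diode_emulation_threshold=2.0):
    """Choose the active phase count for a multiphase regulator.

    Dropping phases at light load removes their switching and
    gate-drive losses; below a small threshold the controller would
    also run the remaining phase in diode-emulation (DCM) mode.
    All thresholds here are hypothetical, for illustration only.
    """
    if load_amps <= diode_emulation_threshold:
        return 1  # single phase, diode-emulation operation
    return min(total_phases,
               max(1, math.ceil(load_amps / per_phase_sweet_spot)))

for amps in (1, 10, 45, 100):
    print(amps, active_phases(amps))
```

With these assumed thresholds, a 1 A load runs on a single phase in diode-emulation mode, while a 100 A load spreads across five phases, which is how efficiency can stay flat across the two-orders-of-magnitude load range described above.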
Other new integrated power stages, such as DrMOS, allow higher switching frequencies with lower loss, thanks to lower on-resistance (Ron) figures and reduced parasitic FET capacitance. For the other rails, newer regulators borrow techniques from portable systems to improve efficiency, including switching from PWM to PFM operation and integrating FETs for higher switching speed and density.
Considering the high-power CPU, memory and ASIC power rails—as well as the proliferation of other rails for field programmable gate arrays (FPGAs), auxiliary analog, I/O and standby circuits—the total benefit of these architectures can be significant.
Another way to improve efficiency is to add intelligence to the power chain. Digital power-management technology, in conjunction with virtualization, can help concentrate CPU activity on a subset of data-center servers, so that large numbers of idle servers can easily be dropped into low-power states. Digital power also allows monitoring of input and load current, voltage, and power, along with diagnostic functions such as overvoltage, overcurrent, and over-temperature detection.
This allows the data-center system controller to monitor efficiency and adjust based on real-time conditions. Digital power-management ICs, such as the Zilker Labs ZL2106, provide advanced algorithms that adapt the conversion to different load situations and communicate information back to the host. Digital power converters are already used in communications-infrastructure systems where high-performance conversion and management of power is critical.
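For instance, PMBus-compliant converters commonly report telemetry such as input voltage, output current, and temperature in the standard LINEAR11 format: an 11-bit two's-complement mantissa Y scaled by a 5-bit two's-complement exponent N, giving Y × 2^N. A minimal decoder is sketched below; no particular device's register map is assumed:

```python
def decode_linear11(word):
    """Decode a 16-bit PMBus LINEAR11 telemetry word.

    Bits 15:11 hold a 5-bit two's-complement exponent N;
    bits 10:0 hold an 11-bit two's-complement mantissa Y.
    The decoded value is Y * 2**N.
    """
    exponent = word >> 11
    mantissa = word & 0x7FF
    if exponent > 0x0F:      # sign-extend the 5-bit exponent
        exponent -= 0x20
    if mantissa > 0x3FF:     # sign-extend the 11-bit mantissa
        mantissa -= 0x800
    return mantissa * 2.0 ** exponent

# Exponent -3 (0b11101), mantissa 100: 100 / 8 = 12.5
print(decode_linear11((0b11101 << 11) | 100))  # -> 12.5
```

A host polling such readings across every rail can compute conversion efficiency (output power divided by input power) in real time, which is the feedback the data-center controller needs to make consolidation and power-state decisions.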
With these kinds of products and technical capabilities, the challenge of reducing energy consumption and optimizing power use in data centers can be met, even while the number of data centers and servers per center continues to expand. ♦
About the author
Peter Oaklander is a Senior Vice President at Intersil Corp., Milpitas, CA.