In the article I recently wrote for EE Times on power efficiency in IT data centers, I covered Power Usage Effectiveness (PUE), the figure of merit commonly used to measure efficiency. PUE is the ratio of the total power used by a data center facility to the power used by the IT equipment alone -- thereby capturing power-supply inefficiency dissipated as heat and the energy used to cool the facility. Many existing data centers have PUE ratings over 2 -- wasting more power than their IT equipment actually consumes. A PUE of 1.8 is generally considered good today, and 1.4 is a goal for many data center managers. Google is way ahead of the curve, achieving a 1.3 PUE in its least efficient data center and a miserly 1.12 in its best installation.
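To make the ratio concrete, here's a minimal sketch of the PUE arithmetic using illustrative numbers (the kilowatt figures below are hypothetical, chosen to match the benchmarks mentioned above):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,800 kW overall to run a 1,000 kW IT load:
print(pue(1800, 1000))  # → 1.8, "good" by today's standards

# Google's best reported figure implies only 12% overhead:
print(pue(1120, 1000))  # → 1.12
```

A PUE of 2 means the facility draws as much power for cooling and conversion losses as the IT equipment itself uses; every point below that is overhead reclaimed.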
Google has an excellent series of web pages on data center energy efficiency. The pages cover everything from power supply efficiency in servers to facility cooling schemes. Indeed, it seems that Google has achieved such low PUE ratings through efficient cooling schemes. The company doesn't seem to have done anything exotic in power distribution or conversion.
In researching the EE Times article, I contacted Google and submitted a set of questions about power efficiency. Below you will find the results, presented in Q&A form, with Urs Hoelzle, Senior Vice President, Operations & Google Fellow.
Q: In terms of power distribution in the data center, has Google deployed architectures other than the legacy scheme that delivers 208VAC to server PSUs? For example, has Google deployed DC power distribution such as the 380VDC scheme that some have proposed? Has Google deployed higher-voltage AC systems, such as 600VAC, that might cut distribution losses?
A: We've used several different designs in our data centers. It's important to look at the forest and not the trees -- the overall efficiency of the power distribution system is determined by many factors, not just voltages. For example, shorter distances typically lead to better efficiency, and the UPS system is usually a bigger target. Overall we've tried to focus on finding the best way to integrate the various components, and to date we haven't used esoteric techniques such as 380VDC distribution.
Q: Has Google developed a roadmap for changes in power distribution architecture? Can you comment on the alternatives analyzed by the Green Grid organization?
A: The Green Grid study shows that there are many comparable implementations that are quite close to each other, and we agree that there probably isn't a single right solution. We agree that it is important to look at end-to-end efficiency (including server PSUs), as the Green Grid has done.
Q: Specifically does Google believe that a DC distribution scheme will succeed in data centers?
A: We don't have any experience with it and thus we can't comment.
Q: You said that the UPS is a bigger target. Is the main target simply improving UPS efficiency?
A: That statement was made in the context of the power distribution system, not all the possible areas for improvement in the data center. So, yes, the UPS is the bigger target when compared to AC versus DC, for example, but areas like cooling are also addressed.
Q: You also mention shorter distances. Is there an implication that the UPS should be integrated with some other part of the power system?
A: Shorter distances are important when you distribute power at lower voltages and higher current; however, this doesn't necessarily constrain the location of the UPS to any specific place. Following best-practice guidelines such as using high-efficiency transformers, high-efficiency UPS, and limiting high-current distribution distances can reduce losses while ensuring power continuity.
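The physics behind this answer is worth spelling out: for a fixed delivered power, halving the voltage doubles the current, and resistive loss in a conductor grows with the square of the current. A minimal sketch with illustrative numbers (the resistance and power figures are assumptions, not Google's data):

```python
def line_loss_watts(power_w: float, volts: float, resistance_ohms: float) -> float:
    """Resistive loss in a feeder: current I = P/V, loss = I^2 * R."""
    current = power_w / volts
    return current ** 2 * resistance_ohms

R = 0.01  # ohms -- the same hypothetical conductor run in both cases

# Delivering 10 kW at 208 V vs. 12 V over that conductor:
print(line_loss_watts(10_000, 208, R))  # ≈ 23 W lost
print(line_loss_watts(10_000, 12, R))   # ≈ 6,944 W lost
```

The 12 V case loses roughly 300x more power in the same wire, which is why low-voltage, high-current runs must be kept as short as possible while higher-voltage distribution can tolerate longer distances.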
Q: I understand that Google relies on servers fed by a single 12V supply. Does Google design the embedded power supplies and regulators on the server boards? Do you use off-the-shelf board-mountable modules?
A: We design our power supplies and motherboards together with our partners, using generally available components. With careful engineering anyone else could do the same.
Q: Does Google rely on communications between the server and the power supply to manage and optimize efficiency?
A: The server is able to communicate with the PSU, but this is used primarily for monitoring and not for active control of the conversion process.
Q: What are the roadblocks to better intelligence at each stage of the power architecture? Do we need to establish new industry standards?
A: We don't see significant roadblocks for better intelligence at the various levels of the power distribution infrastructure. Existing solutions offer excellent monitoring for almost any kind of control system implementation, and we use many such devices. Ultimately, efficiency and total cost of ownership are a product of end-to-end system optimization where monitoring devices are useful tools, though sometimes they are too expensive to be deployed everywhere.