SAN JOSE, Calif. – Engineers at HP Labs claim they have demonstrated software in a research setting that significantly lowers power consumption for large data centers. The company will test the tools in a full production data center at Hewlett-Packard and work on turning them into products that could ship with HP’s container-class systems.
The HP Net-Zero Energy data center techniques cut total power use by 30 percent and reduced reliance on the utility grid by nearly 80 percent in HP Labs tests. The techniques are implemented in a suite of Linux software tools that run on x86 servers and are geared toward data centers that use a local renewable energy source in addition to the public grid.
The HP software predicts both computing demand and energy supply. It then creates and executes a plan for scheduling workloads, aiming to pack work onto the fewest possible servers for the shortest possible time, ideally when the facility can draw mainly on its local renewable energy sources.
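HP has not published the algorithm behind its tools, but the description above suggests a greedy, forecast-driven scheduler. A minimal sketch of that idea follows; every name, number, and data structure here is a hypothetical illustration, not HP's actual code. Deferrable jobs are packed into the hours with the highest forecast renewable supply, tighter deadlines first.

```python
# Hypothetical sketch of renewable-aware workload scheduling, in the spirit
# of the approach the article describes (HP's real algorithm is unpublished).
# Deferrable jobs are packed into the hours with the highest forecast
# solar supply, onto a bounded pool of servers.

SERVERS_AVAILABLE = 10  # assumed size of the server pool

def schedule(jobs, solar_forecast_kw):
    """jobs: list of (job_id, server_hours_needed, deadline_hour).
    solar_forecast_kw: forecast renewable supply per hour (index = hour).
    Returns {hour: [job_id, ...]}, greedily favoring the sunniest hours."""
    # Rank hours by forecast renewable supply, best first.
    hours_by_sun = sorted(range(len(solar_forecast_kw)),
                          key=lambda h: solar_forecast_kw[h], reverse=True)
    plan = {h: [] for h in range(len(solar_forecast_kw))}
    # Schedule tighter deadlines first so they are not crowded out.
    for job_id, need, deadline in sorted(jobs, key=lambda j: j[2]):
        remaining = need
        for h in hours_by_sun:
            if remaining == 0:
                break
            if h <= deadline and len(plan[h]) < SERVERS_AVAILABLE:
                plan[h].append(job_id)   # one server-hour of this job
                remaining -= 1
    return plan

# Toy example: three deferrable jobs and a 24-hour solar forecast that
# peaks around midday (loosely echoing the 134 kW array in the article).
jobs = [("backup", 2, 23), ("report", 1, 12), ("index", 3, 23)]
solar = [0, 0, 0, 0, 0, 0, 1, 20, 60, 100, 120, 130,
         134, 130, 120, 100, 60, 20, 1, 0, 0, 0, 0, 0]
plan = schedule(jobs, solar)
busy = {h: js for h, js in plan.items() if js}
print(busy)   # work lands in hours 11-13, the sunniest window
```

The real system would also have to fold in grid prices, cooling, and non-deferrable interactive load, but the core trade, shifting flexible work into renewable-rich hours on few servers, is what the 30 and 80 percent figures rest on.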
HP Labs achieved its results at a 3,000 square foot test data center it operates at its corporate headquarters in Palo Alto, Calif. The center uses a 134 kW array of solar photovoltaic panels to supply part of its energy. Researchers aim to test the software soon in a 50,000 square foot HP production data center in Fort Collins, Colo.
The company also will test the software in a containerized set of data center systems at a Houston lab that is part of HP’s Moonshot program to test ARM and Intel Atom-based servers. If all goes well, HP could offer the mainly open source code within a year along with its containerized systems.
Power consumption is one of the largest costs of today’s largest data centers, mainly run by Web giants such as Amazon, Google, Facebook and Microsoft.
I question the relevance of this kind of research. It seems to be based on the concept of a workload that can tolerate substantial delay in execution, and I simply can't think of one. This is not interesting for anything web-ish (where work is basically interactive and can't be put off). It's not useful for HPC (where all machines are 100 percent busy anyway). I suppose there are still some workloads where jobs simply need to run "sometime in the next day" - some kind of accounting stuff, maybe - but isn't that sort of last-century?
Running on local hydro, wind, and solar power can reduce the need for utility grid energy. Virtualization to eliminate unnecessary servers can reduce CPU energy consumption. These well-known facts don't require new UNIX software on every server. Perhaps following current best software practices and developing local renewable energy sources (in a suitable location) to feed the grid (and make money) would be the better way to reduce server farm energy costs.
So on the surface it sounds like they want their people working and doing their computing at a time when they can benefit from solar power. So people can work day shifts... Do I need Linux software to tell me that? Sorry, couldn't resist.
@Rick: many details are missing, and I suppose they weren't readily available. Energy efficiency and eventual proximity to "net-zero" status need solutions at many levels: in intelligent infrastructure, in appliances (servers, switches, routers, storage) that implement Energy Efficient Ethernet, in airflow and cooling management, in load balancing down to the rack level, etc.
@selinz: I hear you!
Google already implements many best practices in the march toward a net-zero data center; take a look:
@goafrit: I think you have several separate concepts mixed up. First, there is very little obvious connection between a company's electrical efficiency (which is different from its total consumption) and its number of employees. Second, I haven't read that any technical innovation is as responsible for the layoffs as the general economy and the lack of profitable sales are. As a matter of fact, new technologies usually lead to new products, which lead to new jobs. Like you, I feel sorry for those laid off. Perhaps the solution here is in the November ballot box.
Hooks need to be at the customer end to put tasks in bins right away: "can't wait" work, like that YouTube video you want right now, versus backups or intensive computation that takes hours. The latter can be scheduled at night, just like backups, when power is cheaper, spreading the workload into non-peak power times.
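[A rough sketch of the binning idea from the comment above; the task names and bin labels are purely illustrative, not any real API.]

```python
# Illustrative sketch of client-side task binning: interactive work runs
# immediately, while deferrable work is pushed into an off-peak window.

INTERACTIVE = {"video_stream", "web_request"}          # can't wait
DEFERRABLE = {"backup", "batch_compute", "indexing"}   # can run overnight

def bin_task(task_type):
    """Return the scheduling bin for a task type."""
    if task_type in INTERACTIVE:
        return "run_now"
    if task_type in DEFERRABLE:
        return "defer_to_off_peak"
    return "run_now"   # default: never delay unknown work

print(bin_task("video_stream"))   # run_now
print(bin_task("backup"))         # defer_to_off_peak
```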
I bet you could double the savings by offering power-saving patches and configuration tools for existing PCs and laptops in the field.
Another huge saving would be to ask video and music streamers if they can wait a few minutes until they can be added to a stream list of end users, so one stream can be farmed out to anywhere from 10 to 10 million people at once. Only the end nodes use the same amount of power; each branch upstream uses less and less power and fewer resources, freeing up server headroom.