SAN JOSE, Calif. – Engineers at HP Labs claim they have demonstrated software in a research setting that significantly lowers power consumption for large data centers. The company will test the tools in a full production data center at Hewlett-Packard and work on turning them into products that could ship with HP’s container-class systems.
The HP Net-Zero Energy data center techniques cut total power use by 30 percent and reduced use of the utility grid by nearly 80 percent in HP Labs tests. The techniques are implemented in a suite of Linux software tools that run on x86 servers and are geared toward data centers that use a local renewable energy source in addition to the public grid.
The HP software predicts computing and energy demand and supply. It then creates and implements a plan that schedules workloads onto the fewest possible servers for the shortest possible time, ideally when the facility can draw mainly on its local renewable energy sources.
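The article does not describe HP's algorithm in detail, but the scheduling idea can be sketched roughly as follows: given a forecast of local solar output, pack deferrable jobs into the sunniest hours first, filling each hour before opening the next. The function name, the per-hour slot count, and the forecast numbers are all illustrative assumptions, not HP's actual code.

```python
def plan_schedule(jobs, solar_forecast_kw, slots_per_hour=8):
    """Greedy sketch: assign deferrable jobs to the sunniest hours first,
    filling each hour completely before opening the next, so the fewest
    servers run for the shortest time. jobs: list of job ids, one slot each;
    solar_forecast_kw: predicted kW of local solar output for each hour."""
    # Visit hours from highest to lowest forecast solar output.
    hours_by_sun = sorted(range(len(solar_forecast_kw)),
                          key=lambda h: solar_forecast_kw[h], reverse=True)
    plan = {}
    queue = list(jobs)
    for h in hours_by_sun:
        if not queue:
            break
        # Fill this hour with up to slots_per_hour jobs, then move on.
        batch, queue = queue[:slots_per_hour], queue[slots_per_hour:]
        plan[h] = batch
    return plan

# Illustrative forecast: solar peaks midday, so jobs cluster in hours 4-6.
forecast = [0, 0, 2, 40, 95, 120, 110, 60, 5, 0]
print(plan_schedule([f"job{i}" for i in range(20)], forecast))
```

In this toy version, 20 one-slot jobs land entirely in the three sunniest hours; the remaining hours schedule nothing, which is the "fewest servers, shortest time" goal stated above.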
HP Labs achieved its results at a 3,000-square-foot test data center it operates at its corporate headquarters in Palo Alto, Calif. The center uses a 134 kW array of solar photovoltaic panels to supply part of its energy. Researchers aim to test the software soon in a 50,000-square-foot HP production data center in Fort Collins, Colo.
The company also will test the software in a containerized set of data center systems at a Houston lab that is part of HP’s Moonshot program to test ARM and Intel Atom-based servers. If all goes well, HP could offer the mainly open source code within a year along with its containerized systems.
Power consumption is one of the largest costs of today’s largest data centers, mainly run by Web giants such as Amazon, Google, Facebook and Microsoft.
Mmm, I can't see the average 2.0 user tolerating ANY kind of speed or convenience hit in actuality ;)
It sounds like a worthy project - but maybe kind of fiddly to apply in a real-world data centre?
Mind you, when I saw the headline 'HP cuts data center power in lab tests' I thought the management hadn't paid the electricity bill... "C'mon you guys, turn it back on!"
If you can't wait and have to "get it now," you're an energy hog!
If one is going to look at multiple topics/videos then the delay is not a big deal.
Needs buffering storage on the end-user side to capture content for when the user is ready to view it.
The static internet pay model needs tweaking to incentivize the end user to take a convenience hit to save money, and providers to maximize the energy savings possible.
Hooks need to be at the customer end to put tasks in bins right away: the can't-wait bin (that YouTube video you want right now) versus backups and intensive computations that take hours, which can be scheduled at night when power is cheaper, spreading the workload into non-peak power times.
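The commenter's "task bins" idea can be sketched as a simple dispatcher: interactive tasks run immediately, while deferrable work is queued for an off-peak window. The off-peak hour, function name, and task names below are all illustrative assumptions, not any real product's API.

```python
OFF_PEAK_START = 23  # assumed start of the cheap-power window (11 pm)

def dispatch(task_name, deferrable, now_hour, run_now, night_queue):
    """Route a task into one of two bins: interactive work goes straight
    to run_now; deferrable work waits in night_queue for the off-peak
    hour, unless it is already off-peak."""
    if not deferrable or now_hour >= OFF_PEAK_START:
        run_now.append(task_name)
    else:
        night_queue.append(task_name)

run_now, night_queue = [], []
# Midday (hour 14): the video plays now, the backup waits for the night.
dispatch("youtube-video", deferrable=False, now_hour=14,
         run_now=run_now, night_queue=night_queue)
dispatch("nightly-backup", deferrable=True, now_hour=14,
         run_now=run_now, night_queue=night_queue)
print(run_now, night_queue)
```

A real system would need a richer policy (deadlines, price signals, user overrides), but the two-bin split is the core of what the comment proposes.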
I bet you could double it by offering power-saving patches and configuration tools for existing PCs and laptops in the field.
Another huge saving is to ask video and music streamers if they can wait a few minutes until they can be added to a stream list of end users, so one stream can be farmed out to anywhere from 10 to 10 million at once. Only the end nodes use the same amount of power; each branch upstream uses less and less power and resources, freeing up server headroom.
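The fan-out claim above is essentially a multicast relay tree, and the savings can be shown with back-of-envelope arithmetic. The branching factor and viewer count below are illustrative assumptions; the point is that the origin sends one copy regardless of audience size.

```python
import math

def relay_levels(viewers, fanout=10):
    """Number of relay nodes at each level of a fan-out tree, origin side
    first, when every node duplicates its input stream to `fanout`
    children. The origin itself sends a single stream into the tree."""
    levels = []
    n = viewers
    while n > 1:
        n = math.ceil(n / fanout)
        levels.append(n)
    return list(reversed(levels))

# 10,000 viewers with fanout 10: one node at the origin side, 1,000 edge
# relays, and no level ever carries anywhere near 10,000 server streams.
print(relay_levels(10_000))  # -> [1, 10, 100, 1000]
```

Compared with serving 10,000 unicast streams from the origin, the heavy duplication happens only at the edge, which is the "each branch upstream uses less and less" effect the comment describes.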
@goafrit: I think you have several separate concepts mixed up. First, there is very little obvious connection between a company's electrical efficiency (different from total consumption) and its number of employees. Second, I haven't read that any technical innovation is as responsible for the layoffs as the general economy and lack of profitable sales. As a matter of fact, new technologies usually lead to new products, which lead to new jobs. Like you, I also feel sorry for those laid off. Perhaps the solution here is in the November ballot box.
@Rick: many details are missing, and I suppose they weren't readily available. The energy efficiencies and the eventual proximity to a 'net-zero' status need solutions at many levels - intelligent infrastructure, appliances (servers, switches, routers, storage) that implement energy-efficient Ethernet, airflow and cooling management, load balancing down to the rack level, etc.
@selinz: I hear you!
Google already implements many best practices for the march toward net-zero data centers - take a look:
So on the surface it sounds like they want their people working and doing their computing at a time when they can benefit from solar power. So people can work day shifts... Do I need Linux software to tell me that? Sorry, couldn't resist.
Running on local hydro, wind, and solar power can reduce the need for utility grid energy. Virtualization to minimize the use of unnecessary servers can reduce CPU energy consumption. These well known facts don't require new UNIX software on every server. Perhaps following current best software practices and developing local renewable energy sources (in a suitable location) to feed the grid (make money) would be the best way to reduce server farm energy costs.
I question the relevance of this kind of research. It seems to be based on the concept of a workload that can tolerate substantial delay in execution, and I simply can't think of why. This is not interesting for anything web-ish (where work is basically interactive and can't be put off). It's not useful for HPC (where all machines are 100% busy anyway). I suppose there are still some workloads where jobs simply need to run "sometime in the next day" - some kind of accounting stuff, maybe, but isn't that sort of last-century?