Mmm, I can't see the average Web 2.0 user tolerating ANY kind of speed or convenience hit in actuality ;)
It sounds like a worthy project - but maybe kinda fiddly to apply in a real-world data centre?
Mind you, when I saw the headline 'HP cuts data center power in lab tests' I thought the management hadn't paid the electricity bill... "C'mon, you guys, turn it back on!"
If you can't wait, "get it now," you energy hog!
If one is going to look at multiple topics/videos then the delay is not a big deal.
Needs buffering storage on the end user's device to capture content for when the user is ready to view it.
The static internet pay model needs tweaking to incentivize the end user to take the convenience hit and save money, and to let providers maximise the energy savings possible.
Hooks need to be at the customer end to put tasks in bins right away: "can't wait" for that YouTube video you want right now, versus backups or intensive computation that takes hours, which can be scheduled at night just like backups, when power is cheaper, spreading the workload into non-peak power times.
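The binning idea above can be sketched in a few lines. This is a minimal illustration, not anything from the HP research: the off-peak window (23:00-06:00), the job names, and the two-bin policy are all my own assumptions.

```python
from dataclasses import dataclass, field
import heapq

OFF_PEAK_START = 23  # assumed cheap-power window: 23:00 to 06:00
OFF_PEAK_END = 6

@dataclass(order=True)
class Job:
    priority: int                                   # lower = run sooner off-peak
    name: str = field(compare=False)
    deferrable: bool = field(compare=False, default=False)

def is_off_peak(hour: int) -> bool:
    """Cheap-power window wraps around midnight."""
    return hour >= OFF_PEAK_START or hour < OFF_PEAK_END

def dispatch(jobs, hour, run_now, deferred):
    """Bin jobs at submission time: urgent work runs immediately;
    deferrable work waits unless we're already in the cheap window."""
    for job in jobs:
        if job.deferrable and not is_off_peak(hour):
            heapq.heappush(deferred, job)
        else:
            run_now.append(job)

run_now, deferred = [], []
jobs = [
    Job(0, "stream-video", deferrable=False),   # can't wait
    Job(5, "nightly-backup", deferrable=True),  # hours-long, cheap power
    Job(3, "batch-render", deferrable=True),
]
dispatch(jobs, hour=14, run_now=run_now, deferred=deferred)
print([j.name for j in run_now])   # → ['stream-video']
print(len(deferred), "jobs queued for the off-peak window")
```

The real incentive hook would be pricing: the provider charges less for anything landing in the `deferred` heap.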
I bet you could double it by offering power-saving patches and configuration tools for existing PCs and laptops in the field.
Another huge saving is to ask video and music streamers if they can wait a few minutes until they can be added to a stream list of end users, so one stream can be farmed out to anywhere from 10 to 10 million at once. Only the end nodes use the same amount of power; each branch upstream uses less and less power and fewer resources, freeing up server headroom.
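A back-of-envelope calculation shows why the fan-out helps. Assuming a full 10-way distribution tree (the fan-out value is my own illustrative choice), the number of upstream duplication points grows far more slowly than the viewer count:

```python
import math

def unicast_streams(viewers: int) -> int:
    # Naive model: one server-originated stream per viewer.
    return viewers

def multicast_interior_links(viewers: int, fanout: int) -> int:
    # Interior nodes of a full k-ary distribution tree with `viewers` leaves:
    # each interior node duplicates one incoming stream to `fanout` children,
    # so roughly (viewers - 1) / (fanout - 1) duplications cover everyone,
    # and the origin server still emits only a single stream.
    return math.ceil((viewers - 1) / (fanout - 1))

for n in (10, 10_000, 10_000_000):
    print(f"{n:>10} viewers: {unicast_streams(n):>10} unicast streams "
          f"vs {multicast_interior_links(n, fanout=10):>8} tree duplications")
```

So for 10 million viewers the origin sends one stream and the tree needs on the order of a million duplication points instead of 10 million full streams, which is the "less and less power upstream" effect.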
@goafrit: I think you have several separate concepts mixed up. First, there is very little obvious connection between a company's electrical efficiency (as distinct from its total consumption) and its number of employees. Second, I haven't read that any technical innovation is as responsible for the layoffs as the general economy and the lack of profitable sales. As a matter of fact, new technologies usually lead to new products, which lead to new jobs. Like you, I also feel sorry for those laid off. Perhaps the solution here is in the November ballot box.
@Rick: many details are missing, and I suppose they weren't readily available. The energy efficiencies and the eventual proximity to a 'net-zero' status need solutions at many levels - in intelligent infrastructure, in appliances (servers, switches, routers, storage) that implement Energy Efficient Ethernet, in airflow and cooling management, in load balancing down to the rack level, etc.
@selinz: I hear you!
Google already implements many best practices on the march toward a net-zero data center; take a look:
So on the surface it sounds like they want their people working and doing their computing at a time when they can benefit from solar power. So people can work day shifts... Do I need Linux software to tell me that? Sorry, couldn't resist.
Running on local hydro, wind, and solar power can reduce the need for utility grid energy. Virtualization to eliminate unnecessary servers can reduce CPU energy consumption. These well-known facts don't require new UNIX software on every server. Perhaps following current best software practices and developing local renewable energy sources (in a suitable location) to feed the grid (and make money) would be the best way to reduce server farm energy costs.
I question the relevance of this kind of research. It seems to be based on the concept of a workload that can tolerate substantial delay in execution, and I simply can't think of why. This is not interesting for anything web-ish (where work is basically interactive and can't be put off). It's not useful for HPC (where all machines are 100% busy anyway). I suppose there are still some workloads where jobs simply need to run "sometime in the next day" - some kind of accounting stuff, maybe - but isn't that sort of last-century?