Liebert already sells a version of the former product under the brand
name XDS. In addition, one server maker is testing an evaluation system,
and the Stanford Linear Accelerator Center will commission another system early next year.
The approach lets data centers keep systems in shipping containers,
eliminating the need to build the air-cooled facilities widely used
today. Hughes estimates that could save as much as $600 per server.
However, data center managers will have to pay as much as $1,000 per
server to install the Clustered Systems technology.
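As a rough sketch of the trade-off those figures imply (the $600 and $1,000 numbers come from the article; treating both as one-time, per-server capital costs is an assumption):

```python
# Per-server cost trade-off using the figures quoted above.
# Assumption: both are one-time capital costs per server.
facility_savings = 600   # avoided air-cooled facility build-out
install_cost = 1000      # installing the Clustered Systems technology

net_upfront_cost = install_cost - facility_savings
print(net_upfront_cost)  # 400: extra upfront spend per server
```

On these assumptions the technology carries a net $400-per-server premium upfront, which would have to be recovered through lower ongoing cooling costs.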
The approach is one of a handful of liquid cooling options now circulating
in the high-end server market. Others from the likes of Asetek,
Coolit, Eurotech and IBM employ a variety of water, oils or other
coolants sent to the rack or the server.
Hughes got his start
helping to develop the QuickRing technology at Apple, then trying to
create a video server based on it—but the key chip behind it failed. The
project morphed into the idea of a mesh for a big telco switch that
reaped $110 million in VC funding, but went belly up in the dot-com
bust. Along the way, “I got addicted to startups,” said Hughes.
The company snagged $3 million in funding from President Obama’s economic stimulus.
It has financial runway for another year or so before it needs to see
commercial sales of its technology to stay afloat.
The entrepreneur is pondering his road map. “I've concluded the technology
is capable of supporting a petaflop in a rack with today's product—I
think that the message is that cooling is removed as a barrier to
exaflop computing,” he said.
There are also the hobbyists who cool their motherboards with liquid nitrogen so they can clock them very fast. I think that a major issue with all liquid cooled systems is that the coolant requires more maintenance than the underlying computer. It also represents a common point of failure that can take everything else down. Air cooling may be crude - but it is relatively robust.
Before Cray Research used Fluorinert, Seymour Cray used Freon in two systems: the CDC 6600/6400 and the CDC 7600.
And BTW: Cray could not get a similar cooling system working for what was to be the CDC 8600. That failure is why he left Control Data and formed Cray Research.
What would be really interesting is if someone (Intel) promoted a standard location for a cold plate on a standard 1U. Vendors could arrange heat pipes inside the chassis however they liked, and the rack vendor would be responsible for circulating coolant through the plates that mated with the chassis plates...
modular, non-proprietary and not requiring a coolant hookup for each server.
Technology triumphs! Let's try this:
I have been continually surprised that a liquid-based cooling system has not been put in place for servers: it is much more efficient than cooling with air.
(I am also a little disappointed there is not opportunity to edit/delete a post when errors are made.)