SAN JOSE, Calif. – Google opened a door—virtually—to eight of its global data centers on Thursday (Oct. 18), but the disclosures provided few new technical details.
The Internet giant posted a blog by Urs Hölzle, its senior vice president of technical infrastructure, pointing to an extensive photo gallery of the data centers by photographer Connie Zhou and a story on Wired magazine’s Web site based on exclusive access to the data centers and Google executives.
The blog also points to a Street View virtual tour of a Google data center. All the materials aim at a broad consumer audience, focusing on the vast scale of Google’s operations and its efforts to keep consumers’ data private. The company is under scrutiny in Europe over its handling of private user data.
The disclosures are of general interest but break little new ground on the technology inside Google’s data centers, details the search giant keeps secret. The Wired story does note that Google has gone through a dozen generations of its own proprietary server designs since Hölzle, founder Larry Page, and a third engineer designed the first Google server in 1999.
That first system cost $1,500 and was made from parts sourced at local Silicon Valley electronics shops, saving an estimated $3,500 on the cost of a similar off-the-shelf server. It is known that Google and other big data center operators favor stripped-down designs to save on cost and power while improving reliability.
Exactly how this is accomplished remains a mystery.
The Wired story notes that Google does not yet design any of its own chips but remains open to that possibility. The story also provides two data points on the scale of Google’s server consumption: the company installed its one-millionth server on July 9, 2008, and one data center in Lenoir, N.C., alone operates nearly 50,000 servers.
It’s hard to tell from one photo (see next page) and the two-minute video (above), but the current Google server appears to be a two-socket system that fills more than half a rack shelf. That would make it significantly larger than Facebook’s current design, a sled which takes up a third of a rack shelf.
The current Google server looks similar in size to the previous-generation Facebook server. However, unlike Facebook’s server, Google’s appears to use two half-size adapter cards, potentially adding cost and power draw and reducing reliability.
Overall, the videos and virtual tours in Google’s gallery are carefully constrained in what they show. Glimpses of a networking room give no indication of Google’s work on switches or routers, though the company recently provided details of an OpenFlow system it designed from scratch.
While the multimedia presentation strives to show Google opening up, the company in fact remains highly secretive. It’s possible the current effort was sparked in part by Facebook’s move to throw open the doors to its latest server designs, which it is making openly available.
I think I may have glimpsed Charles Forbin in one of those scenes.
More than the specifics of the individual server design, I’d be interested to hear more about Google’s total global computing and data handling capacity, as well as the rate of growth of both figures.