SANTA CLARA – A Google executive gave a rare peek inside the Web giant’s data center networks to show that the OpenFlow standard it backs for software-defined networking is ready for commercial use.
Google is using OpenFlow on custom-designed hardware for all the internal networks it runs connecting its global data centers, said Urs Holzle, senior vice president of technology infrastructure at Google, speaking in a keynote at the second annual Open Networking Summit here.
OpenFlow is a technique for controlling network operations in software running on centralized servers, saving cost, time and power. It aims to simplify and virtualize today’s business networks, which currently require a number of specialized, distributed systems, each with its own software load.
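The split described above — dumb switches holding only match-to-action rules, with a central program deciding those rules — can be sketched in a few lines. This is an illustrative model only, not the real OpenFlow protocol; the class and field names (`FlowSwitch`, `Controller`, the `(dst, proto)` match key) are hypothetical:

```python
# Minimal sketch of the OpenFlow idea: the switch forwards packets purely
# by flow-table lookup, while a centralized controller computes the rules
# and pushes them down to every switch it manages.

class FlowSwitch:
    """A 'dumb' switch: no local routing logic, only a flow table."""
    def __init__(self):
        self.flow_table = {}  # match tuple -> output port

    def install_rule(self, match, out_port):
        self.flow_table[match] = out_port

    def forward(self, packet):
        # Match on (destination, protocol); packets with no matching
        # rule are punted to the controller for a decision.
        key = (packet["dst"], packet["proto"])
        return self.flow_table.get(key, "send-to-controller")


class Controller:
    """Centralized software that programs many switches at once."""
    def __init__(self, switches):
        self.switches = switches

    def program_route(self, dst, proto, out_port):
        # One routing decision, made in ordinary server software,
        # installed across the whole network.
        for sw in self.switches:
            sw.install_rule((dst, proto), out_port)


sw1, sw2 = FlowSwitch(), FlowSwitch()
ctl = Controller([sw1, sw2])
ctl.program_route("10.0.0.5", "tcp", out_port=3)

print(sw1.forward({"dst": "10.0.0.5", "proto": "tcp"}))  # 3
print(sw2.forward({"dst": "8.8.8.8", "proto": "udp"}))   # send-to-controller
```

Because all the decision logic lives in the controller, it can run on ordinary multicore servers with full development tooling — the point Holzle makes below about speed and software quality.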
If OpenFlow becomes widely adopted it could disrupt the fortunes of major router and switch makers such as Alcatel-Lucent and Cisco Systems as well as the ASICs and embedded processors they use.
Google has enabled “centralized traffic engineering” on its network using OpenFlow. So far it has found it can run such functions “literally 25 to 50 times faster on a 32-core workstation,” Holzle said.
“It becomes easy to do things that are hard to do on embedded processors typically with little memory on a networking box,” he said. “You can use all the [computer] tools for normal software development, and that makes it faster to develop software that is higher in quality,” he added.
In 2009, Google started testing OpenFlow code from Stanford’s Clean Slate project before the software became an official standard. It now uses OpenFlow as the basis for its so-called G-Scale network that links its global data centers. G-Scale actually carries more traffic than a separate Google network that serves its external end users.
“I didn’t expect 18 months after we started tests we could really carry all our [G-Scale] production traffic” on OpenFlow, said Holzle.
The network is running on custom 128-port, 10 Gbit/second switches Google built from standard merchant chip sets. Holzle did not detail the internals of the design.
Functionally, the Google OpenFlow switch “runs almost no software, just the OpenFlow agent,” speaking only the BGP and IS-IS routing protocols, Holzle said. “We wanted to see how far we could go moving software off the box,” he said.
“The hardware is a side piece we had to do,” Holzle said. “I would love to be able to buy this, and I am confident I can get such systems this year or next,” he said.
In a separate conversation after the keynote, Holzle said Google does not expect to buy OpenFlow systems this year as it focuses on finishing the implementation of its current G-Scale network. However, he opened the door to purchases in 2013 and beyond, probably looking for 40 Gbit/second systems supporting as many as 1,000 ports.
Google rolled out OpenFlow in 18 months across its global G-Scale network.
Software-defined networking is essentially virtualization come to the computer network. It promises advances in ease and cost of deploying and managing networks and could be a huge disruptor of today's leaders in comms systems, semis and software. OpenFlow is the first and so far most popular implementation of it.
In short, it's one of those oft-cited paradigm shifts.