Inside Facebook's Wedge: The green card holds a Broadcom switch chip, the red one an Atom-based x86 server SoC.
Facebook decided it couldn’t wait for companies like Arista to come out with new switches, so it is building its own. The Wedge switch (above), already being tested in production networks, is a design Facebook will contribute to its Open Compute Project, an open-source hardware initiative.
“We wanted to get agility because we are changing our requirements in a three-month cycle,” far faster than vendors like Arista and Broadcom can field new products, said Yuval Bachar, a former Cisco engineering manager, now working at Facebook.
The company’s datacenters are approaching a million-server milestone, Bachar said. Today it uses 10 Gbit/s links from servers to its top-of-rack switches, but it will need to upgrade in six to eight months, he said. The Wedge sports up to 32 40-Gbit/s ports.
The most interesting thing about Wedge is its small server card, currently based on an x86 SoC. However, it could be swapped for an ARM SoC or “other programmable elements,” Bachar said.
Facebook's Wedge brings server and switch elements together in one box.
Facebook is developing a hardware abstraction layer to ride on top of the embedded server in Wedge so any third party could write applications for it. It will soon release most, but perhaps not all, of the details of that API and of FBoss, the Linux variant the server will run.
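To make the idea of a hardware abstraction layer concrete, here is a minimal sketch of what such an API could look like: applications program against a vendor-neutral interface, and a backend translates those calls for a particular switch chip. All class and method names here are hypothetical illustrations; Facebook has not published the actual FBoss API details.

```python
from abc import ABC, abstractmethod

class SwitchHAL(ABC):
    """Hypothetical vendor-neutral interface an application programs against."""

    @abstractmethod
    def set_port_speed(self, port: int, gbps: int) -> None: ...

    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None: ...

class AsicBackend(SwitchHAL):
    """One possible backend: would translate HAL calls into switch-chip SDK calls."""

    def __init__(self) -> None:
        self.ports: dict[int, int] = {}
        self.routes: dict[str, str] = {}

    def set_port_speed(self, port: int, gbps: int) -> None:
        # Real code would invoke the chip vendor's SDK here; we just record state.
        self.ports[port] = gbps

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[prefix] = next_hop

# A third-party app sees only the HAL, never the chip underneath.
hal: SwitchHAL = AsicBackend()
for port in range(32):            # Wedge offers up to 32 ports
    hal.set_port_speed(port, 40)  # 40 Gbit/s each
hal.add_route("10.0.0.0/8", "10.0.0.1")
```

The point of the layer is that the same application code keeps working if the green card's Broadcom chip is replaced, or if the red card's x86 SoC gives way to an ARM part.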
Wedge marks Facebook’s first step toward a “future full integration of the server and networking in one element” that may enable smarter decisions about what packets to forward where and when, according to Bachar. Facebook may also use the design to create specialty servers and switches for dedicated jobs, he added.
We see a very strong trend in next-generation switches bringing in programming at a low level to experiment with protocols. That will help us optimize networks. We want to encourage the trend of more programmable chips we can modify, and I think all the silicon developers are going in that direction.
Our datacenter is extremely simple. We don’t have a lot of virtualization in there; only where we have special needs do we use programmable engines.
We are trying to create an environment where apps are not aware of the network. There are some special cases that are very network hungry, so we create a hybrid environment where apps understand the network and talk with a controller on top of the network.
However, he was quick to note most of the software for Facebook’s datacenters still runs on Intel x86 processors.
We believe CPUs today do not require specialty hardware -- we do a lot of load balancing and encryption/decryption in software. We don’t use any dedicated appliances. We’ve reached a point where the servers can do it.
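The load balancing Bachar describes can be done entirely in software on general-purpose servers. As a hedged illustration (not Facebook's code), a common technique is to hash each flow's identity so every packet of a connection lands on the same backend, with no dedicated appliance involved; the server addresses below are made up.

```python
import hashlib

# Illustrative backend pool; addresses are placeholders.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

def pick_server(client_ip: str, client_port: int) -> str:
    """Hash the flow identity so a given client flow always maps to one server."""
    key = f"{client_ip}:{client_port}".encode()
    # Take 8 bytes of the digest as an integer and reduce modulo the pool size.
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return SERVERS[digest % len(SERVERS)]

# The same flow deterministically maps to the same server on every call,
# so any stateless server in the cluster can make the decision.
choice = pick_server("198.51.100.7", 443)
```

Because the mapping is pure computation, it scales with ordinary CPU capacity, which is the point of the quote above: commodity servers have become fast enough to absorb jobs that once needed fixed-function boxes.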