SAN JOSE, Calif. – Cisco Systems announced its first 40 Gbit/second Fibre Channel over Ethernet switch, a 96-port system packing two novel 40 nm ASICs. It also described plans for its first controller for software-defined networking, supporting both the OpenFlow standard and Cisco’s proprietary interface.
The new products show Cisco continues to invest in silicon and proprietary software to gain an edge over competitors, even as it embraces standard interfaces such as OpenFlow that aim to level the playing field.
Cisco’s Nexus 6000 will pack up to 384 10G or 96 40G Ethernet ports with a port-to-port latency of one microsecond. The company claims the switch offers three times the port density and a fraction of the latency of switches from Arista Networks, Juniper and others in high-end data center and service provider markets.
At the heart of the switch are two ASICs that form a three-stage Clos network. In an effort to lower the latency of the existing Nexus 5000, engineers took the radical approach of throwing out the scheduler block in the fabric ASIC.
“A packet goes across the fabric and sometimes collides with another packet and you sort out the collision--it’s going back to [the roots of] Ethernet,” said Peter Newman, a principal engineer who worked on the chips. “The trick to making this work is making sure there’s enough bandwidth in the fabric so there aren’t too many collisions,” he said.
Cisco packed into the fabric chip 192 input and 384 output serdes, each running at 14 Gbits/s. They are linked by a crossbar switch with an arbiter but no buffers or packet storage.
“It’s not a lot of logic, just I/O,” said Newman.
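Newman’s point that collisions stay rare as long as the fabric has surplus bandwidth can be illustrated with back-of-the-envelope arithmetic. The sketch below uses the serdes counts quoted in the article; the front-panel partitioning (48 of the 96 40G ports served per fabric path) is a hypothetical assumption for illustration, not a confirmed Cisco spec.

```python
# Back-of-the-envelope fabric overspeed calculation.
# Serdes figures are from the article; the front-panel split is an
# illustrative assumption, not Cisco's documented architecture.
SERDES_RATE_GBPS = 14        # per-lane rate quoted in the article
FABRIC_INPUT_LANES = 192     # input serdes on the fabric ASIC

fabric_bw = SERDES_RATE_GBPS * FABRIC_INPUT_LANES  # 2688 Gb/s

# Hypothetical: assume each fabric path serves half of the
# 96 40G front-panel ports.
front_panel_bw = 48 * 40     # 1920 Gb/s

overspeed = fabric_bw / front_panel_bw
print(f"fabric overspeed = {overspeed:.2f}x")
```

Any overspeed above 1.0x means the fabric offers more raw bandwidth than the ports can inject, which is what keeps an unscheduled, collision-resolving fabric workable in practice.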
The port chip is more complex, making up the first and last stages of the Clos network. It supports cut-through forwarding, can queue up to 320 Gbits of egress traffic, and includes forwarding tables supporting layer-3 lookups and a large memory switch.
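For readers unfamiliar with Clos fabrics: the classic result for a three-stage Clos network is that it is strictly non-blocking when the number of middle-stage switches m satisfies m >= 2n - 1, where n is the number of inputs per first-stage switch. The sketch below checks that condition; the example values are illustrative only and are not Cisco's actual configuration.

```python
# Strict-sense non-blocking condition for a 3-stage Clos network
# (Clos, 1953): with n inputs per ingress switch and m middle-stage
# switches, any new connection can be routed without rearranging
# existing ones iff m >= 2n - 1.
def clos_strictly_nonblocking(n: int, m: int) -> bool:
    """Return True if a Clos fabric with these parameters is
    strictly non-blocking."""
    return m >= 2 * n - 1

# Illustrative values only -- not the Nexus 6000's real parameters.
print(clos_strictly_nonblocking(n=4, m=7))  # True: 7 >= 2*4 - 1
print(clos_strictly_nonblocking(n=4, m=6))  # False: 6 < 7
```

Meeting the strict condition costs extra middle-stage capacity, which is one reason designs like the one described here instead overprovision bandwidth and tolerate occasional contention.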
The switch is in some ways ahead of the market. Server chips supporting 40G Ethernet are just starting to ship, and none have yet arrived for the Fibre Channel over Ethernet variant that handles both networking and storage. Cisco’s server group has such a chip, but it is currently configured for 10G networks.
OpenFlow backers ultimately want to push all those networking jobs to x86 servers controlled by C++ programs to simplify network management and disrupt the big ASIC-based companies such as Cisco, AlcaLu, Juniper and Ericsson.
It will be a 5-10 year battle methinks.
I'm not sure I see a conflict between proprietary ASICs and OpenFlow. The first is hardware, and the second is software.
High-end Cisco routers confront a simple problem: an enormous number of packets pushed through the network must be processed and routed. As volume steadily increases, the question becomes "How do you do it fast enough to handle them all?" Cisco's answer is custom hardware designed for the purpose. Can it be done with off-the-shelf commodity hardware? I suspect Cisco would do it to lower its costs if it thought it could.
Ideally, the software level will abstract away the hardware, and if I'm a network engineer defining networks, I don't necessarily know or care what hardware is actually doing the work. I use the same commands and procedures regardless.
Rick: I don't think Cisco can have it both ways...they are supporting OpenFlow for PR reasons but clearly push their own technology...it is a lose-lose position they are in...they need their own ASICs and proprietary system architecture to provide added value and prevent low-cost box makers from copying their designs...but the world is going global with OpenFlow and that tide will eventually crush them, they can't compete with Huawei on cost...Kris