BURNABY, B.C. (ChipWire) -- PMC-Sierra Inc. here today announced it has developed a switch-fabric chip set for a networking architecture that the company claims can outdo the largest routers offered by Cisco Systems Inc. and Juniper Networks Inc.
PMC-Sierra claimed its Tiny Tera 1 (TT1) chip set is capable of an aggregate 320 gigabits per second of throughput. It relies on a company-developed interface that sits between line cards and the switch fabric. The company hopes to make this Line Card-to-Switch (LCS) interface an industry standard.
The practical threshold for existing mid-plane architectures -- which divide a system into active crossbars and line cards -- is roughly 200 Gbits/sec., said Anders Swahn, vice president of the carrier switching division at PMC-Sierra, based here. Getting faster speeds requires splitting the switch into multiple boxes that sit contiguously in the central office.
To get around the problem, PMC-Sierra is proposing a setup that cleanly splits functions between line cards and the switch fabric. This way, entire racks of line cards can be added to a network
without being next to the switch fabric, saving the trouble of shifting racks' positions inside a central office.
"The telecom switches have done exactly this since the 1970s," Swahn said. In implementations comprising more than 200 chips, PMC-Sierra's Tiny Tera 1 architecture will target not the wave of terabit-switch startups, but more established players. Some of those
customers are already "in the late stage of development" using the architecture, Swahn said.
At the heart of the architecture are 12 parallel crossbar chips coordinated by a lone scheduler chip. The architecture is designed to handle data in 64-byte chunks -- a number that works well with Internet Protocol (IP) traffic and is large enough to encompass a 53-byte ATM cell. Alternatively, TT1 can use 14 crossbars and handle 72 bytes of data at a time, Swahn said. Either setup stands in
contrast to a typical architecture that uses one large crossbar, he said.
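The article does not describe the scheduling algorithm itself, but the general pattern behind a lone scheduler driving many parallel crossbar planes is a per-cell-time matching: the arbiter pairs inputs with outputs so that no two inputs claim the same output, then every crossbar slice is set to the same configuration for that cell time. The Python sketch below illustrates the idea with a simple greedy pass; it is an assumption-laden illustration, not PMC-Sierra's actual scheduler.

```python
# Illustrative only: a central arbiter computes one input->output matching
# per 64-byte cell time, then applies it to every parallel crossbar plane.
# The greedy matching below is a placeholder, not the real TT1 algorithm.

NUM_PORTS = 32       # ports per system, per the article
NUM_PLANES = 12      # parallel crossbar chips in the base configuration

def schedule_cell_time(requests):
    """requests: dict mapping input port -> set of output ports it wants.
    Returns a conflict-free matching {input: output} for this cell time."""
    matching = {}
    taken_outputs = set()
    for inp, wanted in requests.items():
        for out in wanted:
            if out not in taken_outputs:
                matching[inp] = out
                taken_outputs.add(out)
                break
    return matching

def configure_planes(matching):
    """Every crossbar plane gets the identical configuration, so the planes
    carry their shares of the same 64-byte cells in parallel."""
    return [dict(matching) for _ in range(NUM_PLANES)]

if __name__ == "__main__":
    reqs = {0: {5, 7}, 1: {5}, 2: {3}}
    match = schedule_cell_time(reqs)          # input 1 may lose port 5 this cell time
    planes = configure_planes(match)
    print(match, len(planes))                 # ... 12
```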
For each of a system's 32 ports, six data flow chips build virtual queues to handle prioritization. The ports use a technique that allows priority traffic to slip past the less-important packets, thus avoiding the traffic-stopping problem known as head-of-line blocking. Packets are assigned to one of four service levels to determine quality-of-service.
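The queuing described above is essentially virtual output queuing: each port keeps a separate queue per destination and per service level, so a cell stuck behind a busy output never holds up cells bound elsewhere. A minimal sketch, assuming one FIFO per (destination, priority) pair; the names and structure are illustrative, not taken from the data flow chips themselves.

```python
from collections import deque

NUM_PORTS = 32
NUM_PRIORITIES = 4   # four service levels, per the article

class VirtualOutputQueues:
    """One FIFO per (destination port, priority) at each input port.
    Because queues are separated by destination, a congested output
    can't cause head-of-line blocking for traffic bound elsewhere."""

    def __init__(self):
        self.queues = {
            (out, prio): deque()
            for out in range(NUM_PORTS)
            for prio in range(NUM_PRIORITIES)
        }

    def enqueue(self, dest_port, priority, cell):
        self.queues[(dest_port, priority)].append(cell)

    def dequeue_for(self, dest_port):
        """Serve the highest-priority non-empty queue for this output,
        letting priority traffic slip past less-important cells."""
        for prio in range(NUM_PRIORITIES):        # 0 = highest priority
            q = self.queues[(dest_port, prio)]
            if q:
                return q.popleft()
        return None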
The TT1 architecture also can handle plain best-effort service for ATM and IP networks, as well as Sonet TDM service classes, Swahn said. In the latter case, a specific amount of TDM bandwidth
can be reserved for a particular time; if the bandwidth happens to be unused at any given moment, it can be used for other traffic on a best-effort basis, so there's no penalty for over-reserving the network, Swahn said.
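One way to picture the TDM handling Swahn describes: reserved slots belong to a TDM flow, but if that flow has nothing to send in a given slot, the slot falls back to best-effort traffic rather than going to waste. A rough Python sketch of that fallback rule; the slot table and flow names are assumptions made for illustration.

```python
from collections import deque

# Hypothetical slot table: slot index -> flow that has reserved it (or None).
RESERVATIONS = {0: "tdm_flow_a", 1: None, 2: "tdm_flow_a", 3: None}

def pick_cell_for_slot(slot, tdm_queues, best_effort_queue):
    """Serve the reserving TDM flow if it has a cell waiting; otherwise hand
    the reserved-but-idle slot to best-effort traffic, so there is no penalty
    for over-reserving bandwidth."""
    owner = RESERVATIONS.get(slot % len(RESERVATIONS))
    if owner and tdm_queues.get(owner):
        return tdm_queues[owner].popleft()
    if best_effort_queue:
        return best_effort_queue.popleft()
    return None   # nothing to send; slot stays idle

if __name__ == "__main__":
    tdm = {"tdm_flow_a": deque(["cell-1"])}
    be = deque(["be-1", "be-2"])
    print(pick_cell_for_slot(0, tdm, be))   # cell-1 (reservation in use)
    print(pick_cell_for_slot(2, tdm, be))   # be-1  (reservation idle, reused)
```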
A fourth type of chip used in the TT1 architecture is the enhanced port processor. Every port has one and uses it to terminate the LCS protocol, Swahn said.
The LCS protocol allows PMC-Sierra to create the clean break between the switch fabric and the line cards, pushing all queuing and prioritization onto the switch fabric, while the line cards see the
switch fabric as a "black box."
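The article does not detail the LCS wire protocol, but the "black box" split it describes can be pictured as a narrow interface: the line card hands the fabric a cell tagged with its destination and service level, and all queuing and prioritization happen behind that interface. A hedged Python sketch of the division of labor; the class and method names are illustrative, not LCS message formats.

```python
class SwitchFabric:
    """The fabric side of the split: all queuing and prioritization live here.
    Line-card software never sees these internals."""

    def __init__(self):
        self._queues = {}   # internal per-(port, priority) queues, hidden from the line card

    def accept_cell(self, dest_port, priority, payload):
        # Queue and schedule internally; the caller gets no visibility.
        self._queues.setdefault((dest_port, priority), []).append(payload)


class LineCard:
    """The line card side: segment the traffic, tag it, and hand it over.
    Swapping the fabric behind accept_cell needs no line-card changes."""

    def __init__(self, fabric):
        self.fabric = fabric

    def forward(self, packet, dest_port, priority):
        for cell in self._segment(packet, cell_size=64):
            self.fabric.accept_cell(dest_port, priority, cell)

    @staticmethod
    def _segment(packet, cell_size):
        return [packet[i:i + cell_size] for i in range(0, len(packet), cell_size)]
```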
Part of the rationale behind LCS was to let OEMs upgrade switch fabrics without affecting their line-card software, Swahn said. Likewise, systems vendors could concentrate on adding differentiating features to their line cards while pushing much of the complex work onto PMC-Sierra's switch fabric. A number of tier-one customers are evaluating the solution, PMC-Sierra said.
If the switch fabric and the line cards are indeed kept in separate racks, then an extra step is required: an array of Gigabit Ethernet serializer/deserializers driving vertical-cavity surface-emitting lasers (VCSELs) is needed to pump data onto a fiber-optic cable. These are all existing parts and can provide a reach of 250 feet, Swahn said.
Otherwise, the entire TT1 architecture could reside in a one-rack system along with line cards, Swahn said.
Running traffic in both directions at full capacity, the TT1 architecture can handle 320 Gbits/sec. of data, PMC officials said. In comparison, Cisco's largest router, the GSR 12000, handles only 25 Gbits/sec. "in an apples-to-apples comparison," Swahn said. And the M160 router that Juniper Networks announced at the end of March -- and trumpeted for outdoing Cisco's box -- handles 80 Gbits/sec.
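For context, the 320-Gbit/s aggregate works out to 10 Gbits/sec. per port across the 32 ports mentioned earlier, an OC-192-class rate, assuming the throughput is spread evenly. The quick check below also restates the claimed comparisons using the figures quoted in the article.

```python
# Quick arithmetic on the figures quoted in the article.
aggregate_gbps = 320          # TT1 aggregate throughput
ports = 32                    # ports per system
print(aggregate_gbps / ports) # 10.0 -- an OC-192-class rate per port

# Claimed comparisons (as stated by PMC-Sierra, not independently measured):
print(aggregate_gbps / 25)    # 12.8x the GSR 12000 figure cited
print(aggregate_gbps / 80)    # 4.0x the Juniper M160 figure cited
```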
TT1 actually evolved from a terabit-switch project at Stanford University that was the basis for startup Abrizio Inc., which was acquired by PMC-Sierra in August (see Aug. 24, 1999 story). Despite being a semiconductor firm, PMC-Sierra continued work on the system-level project, and has licensed the resulting reference design to a number of customers, Swahn said.
"Our design flow was basically that of a box company," he said.
The chips in question are all CMOS. Exotic materials such as silicon germanium or gallium arsenide wouldn't pay off here, Swahn said. "You get a one-time improvement of 2.5x, then you're riding Moore's Law," he said, but the bandwidth requirements for these chips are expected to continue to outpace Moore's Law.