Part I of this article explained the basics of 10GBase-T, its protocols, cabling options, and how electromagnetic interference is mitigated. Part II dives deeper into how a new generation of 10GBase-T technology is being deployed in, and revolutionizing, the data center.
When compared to other 10Gbps connectivity solutions, one of the most important advantages of 10GBase-T is its ability to communicate and interoperate with slower legacy Base-T systems. Most commercially available 10GBase-T transceivers are fully capable of falling back to the 1000Base-T (1Gbps) and 100Base-TX (100Mbps) protocols. In this way, data centers can future-proof their switching architectures. A 10GBase-T switch purchased today can communicate effectively with all legacy 1G and 100M servers, while providing the infrastructure to upgrade to 10G switching when servers of commensurate speed are introduced. This also means that data center expenditures can grow incrementally. Rather than a wholesale conversion of all servers and switches to 10G speeds, which would be required with a non-compatible technology such as SFP+ Direct Attach, 10GBase-T switching systems allow upgrading only those links that truly need 10G speeds, while maintaining 1G speeds on legacy servers that don't require such data rates.
Unlike direct attach twin-ax cabling systems, which constrain full-performance distances to as little as 7 meters depending on cable thickness, 10GBase-T allows cable spans to reach the full 100-meter length permitted by structured cabling rules. This extra reach affords data center managers the flexibility of locating switches away from server racks and opens up the data center to architectures that may be more amenable to accommodating legacy configurations that rely on more centralized switching. Heretofore, a lack of economical cabling options for 10G Ethernet beyond a single or adjacent rack has led to the popularity of top-of-rack (ToR) architectures, in which a stack of rack-mounted servers is connected with short cables to a fixed-configuration switch in close proximity -- typically on top of the server rack. However, such an architecture has the drawback of increased management domains, with each rack switch being a unique control plane instance that must be managed and updated.
A more centralized switching approach known as end-of-row (EoR) architecture, in which server ports are routed to a larger switch servicing several racks of servers, can have the benefit of a singular entity for management with commensurate reduction in maintenance costs. Moreover, because larger switches amortize the cost of common elements such as power supplies and cooling fans, the per-port cost of a larger EoR switch still may be lower than the equivalent number of ports in a collection of ToR switches.
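The amortization argument above can be made concrete with a quick calculation. This is a minimal sketch; the chassis costs, per-port costs, and port counts below are illustrative assumptions, not vendor figures.

```python
# Hypothetical per-port cost comparison of ToR vs. EoR switching.
# All prices and port counts are illustrative assumptions.

def per_port_cost(chassis_cost, cost_per_port, ports):
    """Total cost divided by port count: fixed chassis costs
    (power supplies, fans, management) are amortized over more
    ports in a larger switch."""
    return (chassis_cost + cost_per_port * ports) / ports

# Ten 48-port ToR switches vs. one 480-port EoR chassis (assumed).
tor = per_port_cost(chassis_cost=2000, cost_per_port=150, ports=48)
eor = per_port_cost(chassis_cost=12000, cost_per_port=150, ports=480)

print(f"ToR per-port cost: ${tor:.2f}")   # fixed cost spread over 48 ports
print(f"EoR per-port cost: ${eor:.2f}")   # fixed cost spread over 480 ports
```

Under these assumed numbers, the EoR chassis comes out cheaper per port even though its fixed cost is much higher, because that cost is spread across ten times as many ports.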
An additional benefit of 10GBase-T is that of a uniform transmission medium. The alternative in use today relies on a hodgepodge of cabling types, lengths, and connectors: Cat6 for 1000Base-T, twin-ax with SFP+ connectors for short runs of 10G, and optical modules with multimode fiber for longer runs of 10G. By standardizing on 10GBase-T, the data center manager can focus on only one cabling system for all speeds and all distances. And as luck would have it, that cabling system can be inexpensive Cat6A with familiar, cheap, and easily installed RJ45 connectors.
This, of course, leads to another great advantage of 10GBase-T technology: its ability to use ubiquitous and inexpensive cabling and, in many cases, the installed base of cabling that already exists in the data center in support of 1000Base-T systems. Even if a data center does not currently have Cat6 or Cat6A cabling as part of its existing cabling plant, the purchase price of UTP cable is roughly one-third that of connectorized twin-ax cable of the same length, and as little as one-tenth that of fiber solutions once the necessary optical modules are factored in. Figure 3 summarizes some key advantages of 10GBase-T when compared to SFP+ DA cables.
Figure 3: 10GBase-T with Cat6A vs. Direct Attach Twin-Ax cabling
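The cost ratios cited above translate into substantial savings at data center scale. A small sketch, using the article's approximate ratios with an assumed UTP base price and link count:

```python
# Rough cabling-cost comparison using the ratios cited above:
# twin-ax roughly 3x the price of UTP, fiber (with optics) up to
# 10x. The UTP base price and link count are assumptions.

cat6a_utp_per_link = 10.00                 # assumed $ per terminated link
twinax_per_link = 3 * cat6a_utp_per_link   # ~3x UTP (per the article)
fiber_per_link = 10 * cat6a_utp_per_link   # up to ~10x incl. optical modules

links = 500  # assumed number of server links in the data center
for name, price in [("Cat6A UTP", cat6a_utp_per_link),
                    ("SFP+ twin-ax", twinax_per_link),
                    ("Fiber + optics", fiber_per_link)]:
    print(f"{name:15s} ${price * links:>9,.2f} for {links} links")
```

Even at these modest assumed prices, the cabling delta across a few hundred links runs to tens of thousands of dollars.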
Since twisted-pair cabling and RJ45 connectors have been a part of the data center infrastructure for many years, techniques and tools for terminating (attaching connectors to) Cat6 and Cat6A cables on the data center floor exist and are in widespread use. Such tools give managers the flexibility of cutting spooled cable to needed lengths rather than ordering and keeping an inventory of pre-defined lengths of terminated cable, as would be the case for either optical or twin-ax counterparts. Another advantage of UTP cable is that, unlike optical fiber, which requires tight control of bend radius, deployment rules for bending and twisting are significantly relaxed, allowing easy installation in even the tightest places. Figure 4 shows the various types of twisted pair cabling available.
1. Category 5e, 5.6mm (0.220") dia. (for 1000Base-T)
2. Category 6 UTP, 6.5mm (0.256") dia.
3. Category 6 500MHz U/FTP, 6.8mm (0.268") dia.
4. Category 6a U/FTP, 7.4mm (0.292") dia.
5. Category 6a UTP, 8.89mm (0.350") dia.
6. Category 6a UTP, 8.89mm (0.350") dia.
7. Category 6a UTP, 8.89mm (0.350") dia.
8. Category 7 S/FTP, 8.3mm (0.327") dia.
Figure 4: Various Ethernet twisted pair cabling types. 10GBase-T can operate with all types, but Cat6A is specifically designed to allow it to obtain a reach of 100 meters.
Power Saving Modes
One of the arguments against 10GBase-T has been power dissipation, though this criticism is largely based on early implementations of the technology. Recent advances in semiconductor lithography have allowed 10GBase-T transceivers to enjoy a dramatic reduction in the power they dissipate during normal operation. From a per-port power of over 6W just a few years ago (interestingly enough, the same power per port at which 1000Base-T initially shipped), the new 40nm transceivers today are capable of sub-4W performance. And thanks to the continuing shrinkage of chip feature sizes and the famous Moore's Law, the 28nm devices that will become available in the 2012 time frame promise to bring power dissipation down further, to about 2.5W per port when operating over a 100-meter line. For shorter lines, most modern transceivers allow a tradeoff between power dissipation and reach. In a 30-meter mode, for example, a 28nm device's power dissipation is expected to be in the 1.5W range. Figure 5 depicts the power dissipation of 10GBase-T transceivers as semiconductor lithography has improved.
Figure 5: 10GBase-T Transceiver power per port. The reductions in per-port power demonstrated over three prior generations are expected to continue in future lithography generations.
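The per-port figures quoted above make the generational trend easy to quantify. A small sketch using the article's numbers (the generation labels are approximate):

```python
# Per-port power figures quoted in the text for successive
# 10GBase-T transceiver generations (100-meter operation), plus
# the 28nm short-reach mode. The savings arithmetic is a simple
# illustration against the earliest parts.

power_per_port_w = {
    "early parts":    6.0,  # early implementations, >6W
    "40nm":           4.0,  # sub-4W today
    "28nm, 100m":     2.5,  # expected ~2.5W
    "28nm, 30m mode": 1.5,  # short-reach power/reach tradeoff
}

baseline = power_per_port_w["early parts"]
for gen, watts in power_per_port_w.items():
    saving = 100 * (1 - watts / baseline)
    print(f"{gen:15s} {watts:4.1f} W/port  ({saving:4.1f}% below early parts)")
```

The 28nm short-reach mode thus dissipates 75 percent less power per port than the early 6W parts, before any protocol-level savings are counted.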
In addition to the reductions afforded by advances in semiconductor technology, Base-T systems in general and 10GBase-T systems in particular are able to take advantage of some unique and standards-based algorithms exploiting the nature of computer traffic to further reduce power dissipation.
Wake-on-LAN (WoL) is a networking standard uniquely implemented on Base-T systems in which a network element, such as a server, is put to sleep until awakened by a special network signal called a "magic packet." The server's network interface card (NIC) reverts to a very low power dissipation mode during the sleep period but remains alert and waiting for the magic packet. Once it arrives, the server is awakened and normal operation resumes. Since the wakeup time associated with WoL is typically tens of seconds, it is designed for long periods in which servers are idle, such as at night or during other lengthy periods of inactivity. Even the most active of data centers experience periods when only a portion of capacity is needed. This is a natural consequence of overbuilding resources to accommodate peak compute demands and the temporal and seasonal fluctuation in those demands due to non-uniform user locations and time schedules.
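The magic packet itself has a simple, well-defined structure: six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, typically sent as a UDP broadcast. A minimal sketch in Python (the MAC address and broadcast parameters shown are placeholders):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Broadcast the magic packet over UDP (port 9 is conventional)."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# send_wol("00:11:22:33:44:55")  # placeholder MAC; uncomment to send
```

Because the NIC only needs to pattern-match this 102-byte sequence, it can keep the rest of the server powered down while listening.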
WoL can take advantage of these demand fluctuations with startling results: putting to sleep even a single typical server dissipating 500W saves far more power than the combined difference across hundreds of transceiver devices. It should be emphasized again that optical and direct attach links are not designed to support the WoL protocol and, therefore, force the servers and switches they connect to stay on and dissipate full power around the clock. 10GBase-T, in contrast, takes advantage of WoL and benefits the data center in overall reduced power needs.
While WoL is designed for lengthy idle periods, another technology called Energy Efficient Ethernet (EEE) is specifically designed to take advantage of the bursty nature of computer traffic. Typical Ethernet traffic contains many gaps, ranging in duration from microseconds to milliseconds, that to date have been filled with so-called "idle patterns" in which no real computer information exchange takes place but whose waveform transitions can be used for maintaining clock synchronization between transceivers. EEE, developed by the IEEE 802.3az task force and issued as a completed standard in November 2010, defines an algorithm that exchanges those idle patterns for a Low Power Idle (LPI) mode where very little power is dissipated.
The LPI mode used during idle periods requires a new signaling scheme composed of alerts over the line, and to and from station management. During the LPI mode, a Refresh signal is used to keep receiver parameters -- such as timing lock, equalizer coefficients, and canceller coefficients -- current. These are also critical for enabling fast transitions from LPI to Active modes. Typical transition times from Active to LPI mode and back are in the 3-microsecond range. The bottom line is that transceiver power savings utilizing the EEE algorithm can range from 50 percent to 90 percent, depending on actual data patterns. To put this in quantitative terms, a 28nm 10GBase-T transceiver with a typical Active power dissipation of 1.5W for a 30-meter reach will dissipate only 750mW when utilizing the EEE algorithm with typical computer data patterns. System-level optimizations in switches and Ethernet controller silicon are expected to take advantage of EEE's low-power idle signaling and save far more power than the transceiver alone, since they can leverage the consumption of the entire switch or server, which is more than double the power per port of even the previous generation of transceivers.
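The EEE savings above can be modeled as a duty-cycle-weighted average of Active and LPI power. The Active figure comes from the text; the LPI floor and the traffic duty cycle below are illustrative assumptions chosen to land near the 750mW figure cited:

```python
# Back-of-the-envelope model of EEE savings: average power is a
# duty-cycle-weighted mix of Active power and Low Power Idle (LPI)
# power. Active power is from the text; the LPI floor (~10% of
# Active) and the 44% duty cycle are assumptions.

def eee_average_power(active_w, lpi_w, active_fraction):
    """Average transceiver power given the fraction of time spent
    actively transmitting (the remainder is spent in LPI)."""
    return active_w * active_fraction + lpi_w * (1 - active_fraction)

active_w = 1.5    # 28nm transceiver, 30-meter mode (from the text)
lpi_w = 0.15      # assumed LPI floor

avg = eee_average_power(active_w, lpi_w, active_fraction=0.44)
print(f"Average power with EEE: {avg * 1000:.0f} mW")
```

Under these assumptions the average lands at roughly 744mW, consistent with the halving of the 1.5W Active figure described above; burstier traffic (a lower active fraction) pushes the savings toward the 90 percent end of the range.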
In recognition of the many advantages afforded by 10GBase-T, several networking and computer manufacturers have been introducing compliant equipment, including fixed configuration switches, blades for chassis based switches, and NICs for servers. Examples of such equipment include:
- Arista Networks 7050T 48 port fixed configuration switches
- Cisco Catalyst 4900M Switch equipped with the WS-X4908-10G-RJ45 line cards
- Cisco Catalyst 6500 Switches equipped with 16 port 10GBase-T line cards
- Cisco Nexus 2000 fabric extenders with 10GBase-T interfaces
- Extreme Networks BlackDiamond 8800-10G8Xc switches
- Hewlett Packard E5400 and E8200 switches with 10GBase-T v2 zl Modules
- Emulex OCe11102-NT
- Intel X520-T2 10GBase-T Network Interface Card
- Silicom PE210G2i9-T 10GBase-T Network Interface Card
- Dell A1667528 10GBase-T Network Interface Card
Starting in the first quarter of 2012, a wave of new switches, servers, and NICs that incorporate new 40nm 10GBase-T transceivers will be introduced. These will bring new price points and features as well as significantly reduced power dissipation and operating costs. More importantly, LAN-on-motherboard (LOM) chips are being developed that will allow server manufacturers to offer 10GBase-T as the default connectivity option. The implications of this development are quite profound, as it foretells servers with preconfigured Ethernet connections able to negotiate 100M, 1G, or 10G, depending on the capabilities of the link partner on the other end of the line. Data center designers and managers will want to be ready for such a development by deploying a 10GBase-T-capable switch, which can extract the full capability of the server it is connected to.
In this two-part article, we have examined the rising prominence of 10G Ethernet in the data center and explored the various options available for connectivity at these rates. We have focused on the 10GBase-T connectivity option and come to the conclusion that it is the most flexible, economical, backwards-compatible, and user-friendly 10G Ethernet connectivity option available. We have examined the basics of 10GBase-T transceiver operation and outlined the benefits of 10GBase-T technology -- namely the ability to interoperate with legacy slower technologies, the use of ubiquitous and inexpensive cabling and connectors, the flexibility of full structured wiring reach, the ease of Cat6A cabling deployment, and various power-saving options. We addressed the implications of 10GBase-T technology availability to data center architectures in the context of alternatives to the currently popular ToR switch placement. This information should empower those responsible for data center design and operation to prepare for the near future, in which 10GBase-T solutions will become prevalent.
About the Authors
Ron Cates is vice president of marketing, networking products, at PLX Technology, Sunnyvale, Calif. (www.plxtech.com), a leader in 10GBASE-T transceivers. He has more than 30 years of experience in the semiconductor industry and holds BSEE and MSEE degrees from the University of California at Los Angeles and an MBA from San Diego State University. He can be reached at firstname.lastname@example.org.
George Zimmerman is an independent consultant specializing in physical layer communications technology. He is an acknowledged expert in wireline communications and has been a defining force in the development of 10GBASE-T, Energy Efficient Ethernet, and various DSL technologies. He holds a Ph.D. in electrical engineering from Caltech and an undergraduate degree from Stanford University. He can be reached at email@example.com.