Some of you may recall the huge debate three years ago within the then "Higher Speed Study Group" (HSSG) over whether or not to include 40 Gigabit Ethernet. As we now know, that drama played itself out and the HSSG decided to include the development of both 40GbE and 100GbE in what became the IEEE Std 802.3ba-2010 specification. This project has proven to be a critical point for Ethernet, as it broke with the technology's legacy of making only 10x improvements in speed.
It appears, though, that the 100GbE family will continue to be a source of technological drama for the industry. I believe two efforts on the IEEE 802.3 horizon will prove to be a source of future industry debate, as well as good material for this blog. (Hey, these blogs just don't come to me, you know?)
The first effort is the development of 100GbE electrical operation across a backplane and a copper cable assembly. Currently, IEEE Std 802.3ba-2010 does define electrical operation across a copper cable assembly, but not across an electrical backplane. The 100GBASE-CR10 physical layer specification is based on the Backplane Ethernet specification (10GBASE-KR), which results in 10 lanes of 10.3125 Gb/s in each direction. While this may work for some cabling applications, it simply doesn't scale for backplane applications, where hundreds of Gb/s of slot capacity will be required. Many are looking at 4 lanes of 25 Gb/s to support 100GbE.
So what drama can we expect? There will be two areas—the channel and the signaling scheme. The 10GBASE-KR specification is based on NRZ and targets approximately 25dB of insertion loss at Nyquist, which is 5.15625 GHz. Using the appropriate board materials, reaches of up to 1m can be supported. The problem is that if one were to scale 10GBASE-KR to 25Gb/s operation using NRZ, that same approximately 25dB budget provides neither the reach of 10GBASE-KR (I have seen data suggesting this would only support 27 inches) nor the ability to use the lower cost PCB materials that are typical in blade server applications. So there will be a debate over reach and loss, which will in turn influence the discussion on the signaling scheme. It could be an interesting drama that unfolds. Perhaps NRZ's days as the signaling scheme of choice for backplane applications are over? It is not clear, but it would be premature to count NRZ out.
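The arithmetic behind this debate is simple but telling. For NRZ, the Nyquist frequency is half the baud rate, so moving a lane from roughly 10.3 Gb/s to roughly 25.8 Gb/s more than doubles the frequency at which the channel's insertion loss is evaluated. A quick sketch (the 25G lane rate assumes the same 64B/66B coding overhead as 10GBASE-KR; that coding choice is my assumption here, not something defined in the text above):

```python
# Rough arithmetic behind the backplane signaling debate.

def nrz_nyquist_ghz(line_rate_gbps: float) -> float:
    """Nyquist frequency (GHz) of an NRZ signal: half the line rate."""
    return line_rate_gbps / 2.0

# 10GBASE-KR lane: 10 Gb/s payload plus 64B/66B overhead = 10.3125 Gb/s
kr_nyquist = nrz_nyquist_ghz(10.3125)

# Hypothetical 25G lane for a 4x25 scheme, assuming the same 64B/66B coding
lane_25g = 25.0 * 66 / 64            # 25.78125 Gb/s
lane_25g_nyquist = nrz_nyquist_ghz(lane_25g)

print(f"10G lane Nyquist: {kr_nyquist} GHz")     # 5.15625 GHz, as cited above
print(f"25G lane Nyquist: {lane_25g_nyquist} GHz")
```

Since channel loss grows with frequency, holding the loss budget at roughly 25dB while the Nyquist point moves from about 5.16 GHz to nearly 12.9 GHz is what shrinks the supportable reach so dramatically.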
The second drama relates to 100GbE's support for data center reaches. The 100GBASE-SR10 specification supports 100m over OM3 MMF and 150m over OM4 MMF, and these reaches are proving to be too short for many application spaces. The 100GBASE-LR4 specification supports 10km, which is overkill for many application spaces, including cost-sensitive data center applications; as a result, the specification is not considered cost-optimized for those application needs at this time. I should note that I have had multiple discussions with individuals who are looking for a cost-optimized solution to support at least 2km. Some argue that a 10 lambda WDM solution of 10.3125 Gb/s per lambda would be the better choice at this time. All appear to agree that a lower cost solution is necessary, but there is disagreement over the actual solution: WDM based on 10 lambdas of 10G or 4 lambdas of 25G. Proponents of the 10x10 solution will point to the near-term need for a cost-effective solution for at least 2km, while others will note that the 4x25G solution will provide the better long-term answer in terms of power, cost, and port density.
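It is worth noting that, at the raw line rate, the two contending WDM approaches land in exactly the same place; the disagreement is about lane count, not bandwidth. A quick tally (the 25G per-lambda rate of 25.78125 Gb/s assumes 64B/66B coding is retained, which is my assumption for illustration):

```python
# Both WDM proposals carry the same 100 Gb/s payload; they differ only
# in how many lambdas the aggregate is split across.

def aggregate_gbps(lanes: int, per_lane_gbps: float) -> float:
    """Raw aggregate rate (Gb/s) across all lambdas."""
    return lanes * per_lane_gbps

ten_by_ten = aggregate_gbps(10, 10.3125)    # 10 lambdas at 10.3125 Gb/s
four_by_25 = aggregate_gbps(4, 25.78125)    # 4 lambdas at 25.78125 Gb/s

print(f"10x10 aggregate: {ten_by_ten} Gb/s")
print(f"4x25  aggregate: {four_by_25} Gb/s")
```

Both come to 103.125 Gb/s raw (100 Gb/s of payload after 64B/66B overhead), which is why the debate turns entirely on near-term component cost versus long-term power and port density, rather than on capacity.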
I am actually looking forward to these dramas where individuals will make their cases on the floor of an IEEE 802.3 project. Everyone will need to bring their "A" games, as I expect these dramas to be quite intense, as there will be considerable scrutiny of the material presented. All in all, I am expecting good standards theater to watch. Any individuals interested in either of these efforts may reach me at firstname.lastname@example.org.
As with any new technology, you see power and cost improvements as the technology matures. One thing that should be pointed out is that there are multiple physical layer specifications for 100Gb/s, and each has its own power specifications.