Rick, I went on vacation astonished by the reactions to my comments. I tried posting a detailed response, showing IEEE 802.3 references to the OTN (aka SONET) compatibility goals, but for some reason it got rejected.
So I invite you simply to search for "OTN carriage for 40 Gb/s Ethernet" in any search engine, and you will find numerous memos and presentations on the subject, dating back to 2007. I say 2007 because, for some odd reason, a 2012 presentation on this very topic was dismissed out of hand. (Most of these appear to be .pdf documents which don't provide a handy URL, but they come up on the first page of your search.) Oh, and not to mention a 2006 presentation from the ITU-T that also mentions Ethernet carriage directly on top of SONET/STM.
Furthermore, you will also find IEEE presentations that explain exactly what I had said. Up through 40 Gb/s, SONET compatibility was seen as an important goal; beyond that, not so much, since SONET is no longer seen as essential and other OAM techniques have emerged that do not depend on SONET framing. One such presentation calls 100 Gb/s a "clean slate," or words to that effect.
Also, commercial success or lack thereof has no bearing on any of this. We are talking about initial motivations, not whether or not they were ultimately found to be important.
Finally, opposition to 4 or 40 anything, from within 802.3, should be obvious: it doesn't fit the powers-of-ten scheme Ethernet adopted early on. But the fact remains that the 40 Gb/s rate, even in 2007, was linked to OTN carriage. Otherwise, it would have made a whole lot more sense to pick either 25 or 50 Gb/s, as easy speeds to aggregate into the powers-of-ten speeds that Ethernet always went by. In fact, name any Ethernet standard, other than 40 Gb/s, that does NOT adopt 2.5, 25, or 250 (or 10, of course) when it creates a multi-lane interface! I can name several that use these values.
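To make that lane arithmetic concrete, here's a quick sketch. The interface list is just a few well-known examples (not exhaustive), and the rates shown are per-lane data rates, not signaling rates:

```python
# A few multi-lane Ethernet interfaces and their per-lane data rates (Gb/s).
# Note how lanes of 2.5, 10, and 25 Gb/s aggregate to powers-of-ten speeds --
# except when 4 x 10 lands on 40. (Illustrative list only.)
interfaces = {
    "XAUI (10GbE)":  (4, 2.5),
    "40GBASE-SR4":   (4, 10.0),
    "100GBASE-SR10": (10, 10.0),
    "100GBASE-SR4":  (4, 25.0),
}

def is_power_of_ten(x: float) -> bool:
    """True if x is 1, 10, 100, ... (within floating-point tolerance)."""
    while x >= 10:
        x /= 10
    return abs(x - 1.0) < 1e-9

for name, (lanes, rate) in interfaces.items():
    total = lanes * rate
    flag = "" if is_power_of_ten(total) else "  <-- the odd one out"
    print(f"{name}: {lanes} x {rate} Gb/s = {total:g} Gb/s{flag}")
```

Run it and only the 4 x 10 combination fails the powers-of-ten check, which is exactly the oddity being argued about.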
Perhaps you have been playing this game for years, and you are entitled to your opinion, but as someone who was there, I can say unequivocally that your justifications below don't match up with the events of the time. As I noted, there was at times alignment, but this was not the justification for the decisions made.
For example, your question about 40GbE is very easy to explain from a server perspective. When making a jump from 10G, what were the logical choices - 20G? 40G? 50G? 20G was not the right choice at the time, because it was seen as only a 2x increase. So why not 50G, as you suggested initially? As one individual put it, we prefer powers of 2 when coming up with interfaces. And another point: when serdes vendors build these devices, they build single-port, then dual-port, then quad-port.
But as you said, you have been playing this game for years - so you know.
Let me leave you with this one last thought - the networking community fought tooth and nail, until July of 2007, against including 40G in the project. So it is very easy for me to dismiss the justification you provided. Networking did not want it - they wanted all of the focus on 100GbE. This is a matter of record, and I would suggest you go back and review the IEEE 802.3 Higher Speed Study Group records, which are in complete contradiction with the reasoning that 40G was included because of SONET.
As I said, the rates are different, and ultimately there was alignment and the opportunity to leverage, but I do disagree with your assertion that this is why 10G was chosen.
Sorry, but I thought I said that alignment with SONET, at 10 Gb/s, had been a design goal from the beginning. The first OC-192 transceivers had been on the market since the 1990s, originally from Rockwell International, then sold to Alcatel. This was years before Ethernet reached that speed. The fact that OC-192 was already in place is what leveraged Ethernet into the metro area network game, using that speed.
I've been at this business for decades too, and I saw this coming, because ATM, which depended on SONET, had lost favor by the mid 1990s. It lost favor primarily because packet-switching-friendly Internet protocols had won the race, beating out ISDN/BISDN. And ATM, developed specifically for BISDN carriage, was not as efficient at carrying IP packets as Ethernet is - too much cell overhead that went to waste with IP payloads (IP packets already carry their own routing overhead; they don't need ATM's). So here we had these SONET pipes available, and why not just put Ethernet over them, instead of ATM, to carry IP payload?
The same happened at 40 Gb/s. OC-768 transceivers, a hair less than 40 Gb/s net throughput, had been available since 2001, ready to go years before 40 Gb/s Ethernet was being developed. Again, it came as no surprise to me that Ethernet would repeat what it had done with OC-192, again for immediate carriage over metro area networks.
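For the record, the rates work out exactly as described: every SONET OC-n line rate is n times the 51.84 Mb/s OC-1 base rate. A quick sanity check (illustrative Python, nothing here beyond the arithmetic):

```python
# SONET line rates scale linearly from the OC-1 base rate of 51.84 Mb/s.
OC1_MBPS = 51.84

def oc_line_rate_gbps(n: int) -> float:
    """Line rate of SONET OC-n, in Gb/s."""
    return n * OC1_MBPS / 1000.0

print(f"OC-192: {oc_line_rate_gbps(192):.5f} Gb/s")  # OC-192: 9.95328 Gb/s
print(f"OC-768: {oc_line_rate_gbps(768):.5f} Gb/s")  # OC-768: 39.81312 Gb/s
```

So OC-192 is the ~10G rate quoted in the papers, and OC-768 comes in just under 40 Gb/s - "a hair less," as I said.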
Next - your reference in support of the OTN compatibility objective refers to a presentation that was done in Nov 2012.
And it's a source that unequivocally explains WHY 40 Gb/s was chosen - specifically why 40, instead of some other rate like 50. The reason was that it was available immediately, for metro nets. Your references only discuss reasons for getting beyond 10 Gb/s; they do not address the choice of 40 Gb/s as opposed to something more Ethernet-friendly. The number 40G already existed out there; neither 25 nor 50 did. So it was somewhat logical to base an Ethernet variant on 40G, even for PHYs that didn't involve the SONET WAN.
By the way, I should point out that I was the Chair of the IEEE P802.3ba Task Force that developed 40GbE and 100GbE.
That's okay; I noticed that. However, it doesn't mean that our perspectives can't be different while viewing the same scene. I've been playing at this game since 1983.
First - you lump Ethernet and SONET together as 10Gb/s rates, but they are different. I didn't want to go quoting presentations, but since you like these quotes, let's go there. Please refer to the IEEE 802.3 Clause 49.1.2 objectives - "Support LAN PMDs operating at 10 Gb/s and WAN PMDs operating at SONET STS-192c/SDH VC-4-64c rate." As I said, the rates are different, and ultimately there was alignment and the opportunity to leverage, but I do disagree with your assertion that this is why 10G was chosen.
Next - your reference in support of the OTN compatibility objective refers to a presentation that was done in Nov 2012. This was 6 years after the straw poll I mentioned, which was in Nov of 2006. You are pointing to a reference about living with the effects of 40GbE, not about the influence on the decision to do 40G.
For your reference, the initial draft of the original 5 Criteria Responses that justified 40G and 100G can be found here - http://www.ieee802.org/3/hssg/public/july07/jaeger_01_0707.pdf. Please note how 40G broad market potential was justified -
Servers, high performance computing clusters, blade servers, storage area networks and network attached storage all currently make use of 1G and 10G Ethernet, with significant growth of 10G projected in '07 and '08. I/O bandwidth projections for server and computing applications indicate that there will be a significant market potential for a 40 Gb/s Ethernet interface.
HMMM - no mention of 40G for OTN
The following January after the project had been approved individuals drove consensus that 40G could be used for networking, and not just servers - see http://www.ieee802.org/3/ba/public/jan08/barbieri_01_0108.pdf.
I remember the day well, because I found out the day I had to put my golden retriever down!
Does this mean that Ethernet is not being used in these spaces? I never said that - I merely said this isn't what drove the decision to choose these rates.
By the way, I should point out that I was the Chair of the IEEE P802.3ba Task Force that developed 40GbE and 100GbE.
Ethernet adopted 10G because of SONET - I will agree that Ethernet leveraged SONET technology in developing 10GbE; however, to say the rate was chosen because of SONET is clearly a stretch.
I think we only differ in the emphasis we place on the universe outside of IEEE 802.3, and how it influenced the IEEE 802.3 choices.
The rates of 10 Gb/s and 40 Gb/s for Ethernet were chosen either entirely because of the existing SONET WAN physical layer, or were at least meant to exploit that existing physical layer from the inception. I would agree that a 10 Gb/s rate for Ethernet would probably have been demanded sooner or later regardless of SONET (since the powers-of-ten speed-increase sequence had existed from day 1 with Ethernet). But had it not been for the fact that SONET 40 Gb/s pipes and transceivers already existed, I don't think it's a stretch to say that IEEE 802.3 would likely never have chosen the 40 Gb/s speed.
This presentation from 2002 makes the case for the 10 Gb/s variant:
By far, the most widely used networking technology in Wide Area Networks (WANs) is SONET/SDH. With the growth of Ethernet now into Metropolitan Area Networks (MANs) there is a growing need to interconnect Ethernet LANs and MANs to these prevalent SONET networks. With the advent of 10 Gigabit Ethernet, the commonality of line rates between OC-192 SONET and 10GE has opened up the opportunity to simplify the interface. This paper describes how 10GE nodes can easily interconnect with OC192 SONET networks without leaving the Ethernet cost model behind.
Previous generations of IEEE 802.3 Ethernet standards (i.e. 10 Mb/s, 100 Mb/s, 1 Gb/s) were not near the traditional interface bit rates of transport equipment in the wide area, such as DS-3 (45 Mb/s), OC-3 (155 Mb/s), OC-12 (622 Mb/s), and OC-48 (2.5 Gb/s). Out of necessity, an extra piece of equipment was required in the network to convert Ethernet rates (including protocol conversion) to those accepted by transport equipment.
10GE offered the potential for an Ethernet solution aligned with the 9.953280 Gb/s rate of the OC-192 backbone. For the first time in the history of Ethernet, no additional speed matching equipment would be required to link with the WAN. A seamless, end-to-end Ethernet network could be built at lower network cost.
So, the existence of SONET OC-192 played a big part, I think it's fair to say, although again, that speed was logical for Ethernet too, beyond the 1000BASE speed.
You say, 40GbE had the most opposition. 40G emerged from the server vendors looking for an interim solution to a 10x leap to 100G servers.
And this begs the question, why did this server community choose 40 Gb/s, in their immediate search for speed, instead of say 50 Gb/s?
Both 802.3ba and 802.3bm have an objective to "provide appropriate support for OTN" (optical transport network). This is taken to mean an ability to transport 40Gb/s and 100Gb/s Ethernet links over a long-haul OTN link, using the mappings defined in ITU G.709.
Fact is, G.709 had been around for decades! We find that the ITU originally published it in 1988, to describe the "synchronous multiplexing structure," i.e. SONET/STM pipes in major trunk lines. And the existence of G.709, even if updated over the years, is the rationale for selecting 40 Gb/s for that faster Ethernet, as stated above.
Guys, I am going to have to disagree with some of the assertions in here.
Ethernet adopted 10G because of SONET - I will agree that Ethernet leveraged SONET technology in developing 10GbE; however, to say the rate was chosen because of SONET is clearly a stretch. At that time Ethernet was enjoying the success of GbE and its "10x the performance at 3x the cost" mantra. Further, it didn't choose the SONET rate - this was the whole LAN / WAN fight.
Ethernet adopted 40G because of SONET - absolutely not! Initial discussions during the HSSG came from a networking perspective. If you go and look at the minutes from Nov 2007, you will see that at that time 40GbE had the most opposition. 40G emerged from the server vendors looking for an interim solution to a 10x leap to 100G servers. SONET came into the picture when people said that if Ethernet did 40G, it would need to fit into the ODU3 payload, which the ITU took care of via transcoding once 40GbE was settled on.
While the similarity of rates is a convenient justification, I would argue that the server community was looking to leverage a mature technology (i.e. 10GbE) to come up with a next-generation solution that would also be low cost.
The focus and emphasis on optical solutions is really missing the point of this effort. 25GbE is targeting the serial-channel server interconnect. These are primarily copper interconnects (backplanes and copper cables feeding into switch chips that support up to 25Gb/s I/O), though some are arguing for MMF solutions as well. So to say that a 100G serial optical solution (which is being debated in the 400GbE Task Force right now - again) means that 100G serial copper is imminent is, frankly, a stretch in my opinion.
There is building consensus on a 50Gb/s electrical solution, but understand that this is for chip-to-chip and chip-to-module interfaces, not copper twin-ax or backplane solutions.
As chair of the 400GbE Task Force, I can tell you the debate over 8x50 or 4x100 is far from over. There is also debate over whether the electrical and optical solutions need to match up identically. In my opinion, things can be optimal at a given instant, but getting to that point frankly does take time. So we should be looking at these two instances, i.e. optical and electrical, as independent, but obviously with a degree of co-dependence.
So 25Gb/s signaling is becoming mature (server people go yay!) and matches into an optimal situation for networking (networking / data center people go yay!).
The debate over the rate after that - 40G or 50G - is already raising a lot of interest, debate, etc. It is for this reason that the Ethernet Alliance (full disclosure - I am chairman of this alliance) is hosting a technology exploration forum on rate debates, to start industry consensus building in this area.
And finally - for those saying 40G is the right choice, understand that it is application and time driven. The CFI presentation clearly showed how 40GbE as standardized today (based on 4x10G) is not optimal from a networking perspective (more switches, more cables, more capex, more opex) when considering a large cloud-scale data center environment. And it is important to note that it is not just the MAC rate development that is important, or even the development of the electrical signaling - it is when these signaling options can be pulled into a high-density ASIC environment.
So given that we do not yet have 40G or 50G serial solutions for chip-to-chip, chip-to-module, backplane, copper cable, or MMF applications, and we assume a conservative 3-year standardization time frame, plus 2 years for integration into high-density chips, followed by time for design, qualification, and deployment into the industry - we are probably looking at another 5+ years before we are at the same point, technology-wise, that we are with 25GbE today.
That is the history and why the decision to do 25GbE makes sense in the industry.
Re 50G: I think some folks want a standard, even if it's just dual-lane, to have a follow-on to their 25GbE products. I have heard of folks pushing the core serdes to 30+ Gb/s, but I can't imagine how hard 50 might be, and 100... hmmm.
Right or wrong, the 10 Gb/s and 40 Gb/s fiber links already existed for SONET, so their application to Ethernet was logical. Beyond 40 Gb/s is new territory.
I would expect that when the 100 Gb/s serial Ethernet link is developed, any idea of 50 Gb/s will become obsolete. Seems to me that the only reason to consider serial 50 Gb/s is that it should be easier and quicker to develop than the serial 100, and that's about it. Otherwise, what's the point?