John D'Ambrosia muses about the IEEE 802.3bj 100 Gb/s Backplane and Copper Cable Task Force he chairs.
Time travel! Forecasts! Different markets—different applications! The laws of physics being pushed to current known limits! (Funny how that "hope/change" mumbo jumbo doesn't apply here!)
OK, what the heck am I talking about?
Well, to most this might sound like I am going off the deep end, but I swear I haven't! I am currently contemplating the IEEE 802.3bj 100 Gb/s Backplane and Copper Cable Task Force that I chair.
Alright! Flash back to the first paragraph and you are probably going "Huh?"
This project is really wrapped up in that first paragraph.
Switching and routing need 100G now for their backplanes. Blade servers? Well, probably not until 2017. But the backplanes that will need to be upgraded? Those probably start shipping in 2014.
So what is the right choice?
Hey, these things aren't easy, but this is why we supposedly get paid the big bucks, right? Yeah, right! Just let me call upon "The Force!"
Some are arguing that NRZ signaling will be the right solution, while others are arguing for the development of both NRZ and PAM-4 solutions. It all comes down to the channel, THE statement of work! And that is the problem. The costs of today are not the costs of tomorrow. And no one has a crystal ball (because if they did, they would probably be off making a fortune in the stock market!).
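To make the tradeoff concrete: at 25 Gb/s per lane, NRZ sends one bit per symbol, while PAM-4 packs two bits into each of four amplitude levels, halving the symbol rate and the Nyquist frequency the channel must pass, at the cost of a smaller eye. A rough back-of-the-envelope comparison (my numbers, a minimal sketch, not the Task Force's analysis):

```python
# Back-of-the-envelope NRZ vs. PAM-4 comparison for one 25 Gb/s lane
# (4 lanes -> 100G). Illustrative only -- real channel work needs
# loss, reflection, and crosstalk models.
import math

bit_rate = 25e9  # bits per second, per lane

for name, bits_per_symbol in [("NRZ", 1), ("PAM-4", 2)]:
    baud = bit_rate / bits_per_symbol   # symbols per second
    nyquist = baud / 2                  # highest fundamental frequency
    levels = 2 ** bits_per_symbol
    # The same voltage swing is split into (levels - 1) stacked eyes.
    eye_penalty_db = 20 * math.log10(levels - 1)
    print(f"{name}: {baud / 1e9:.1f} GBd, Nyquist {nyquist / 1e9:.2f} GHz, "
          f"eye penalty {eye_penalty_db:.1f} dB vs. NRZ")
```

PAM-4 buys back roughly half the channel bandwidth but pays roughly a 9.5 dB eye-height penalty, which is exactly why the answer hinges on how lossy the channel of record turns out to be.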
In my former life I used to say "The application and its economics will dictate the right solution!" I would suggest that all those involved in this project take a step back and consider each other's perspective.
John D'Ambrosia is chief Ethernet evangelist, CTO office, at Dell.
Well, way back in the dim dark past (2000-2001, before the Great Telecom Meltdown of 2002), a start-up in Allen, TX that I worked for was building a really high-speed metro switch. The BP had a bunch of LVDS buses running at 2.5 Gb/s; however, this was CO stuff and had to be in-service upgradeable for 20 years! So our target was to be sure we could handle 10 Gb/s error-free. The guy assigned to it was very conservative and built us a BP with inner layers using exotic materials (GoreTex?). We didn't have any silicon to really test it with beyond the then-current 2.5 Gb/s, so we contacted a major semi manufacturer who we knew was working on some really far-out tech. We shipped them a BP along with some connectorized paddle boards. They checked it out for us with experimental semis at 10 Gb/s: NO ERRORS. Then, for kicks, they started running up the clock to the limit. Somewhere near 40 Gb/s they started getting some errors! That BP turned out to be AWESOME, but overkill; as I recall we finally got it down to 14 layers, at a volume production cost (with connectors) of over $3K! If anybody wants the name of the guy who did it, let me know....
Yes, would be nice to know the name of this guy. Would his initials be "B.U."? Did this start-up name begin with "X" by any chance?
I too live in Allen, TX, and have designed backplanes, but not for bit rates as high as you describe. It would be interesting to know the card connectors used, and whether the diff pairs were edge- or broadside-coupled. Maybe this mystery guru could write an article...
Mark! Previous Boss, glad to hear from you. Not many bosses speak well of employees after 10 yrs.
I was conservative and driven. There was no minimizing the importance of the backplane, and of how critical it was to the company that it perform to extreme throughput. It required special drivers & receivers, with additional performance features. I laid out the BP etch to 1/10,000 of an inch. The geometries converged to drop impedance errors to less than 5%. We used EO TDR equipment to scour the impedance match from card to BP to card. The highest glitches were the connectors. We worked on it together. I think the biggest help was the 3-month wait for the architecture to congeal, which allowed me to read a stack of documents on how to design a high-speed BP that stood about 18" off the floor in my office. You might recall the stack. Anyway, good to hear from you!!! I still have my notes in case you want to go solo on this venture. -BB
Biggest lessons: Impedance formulas were not consistent, even from the same companies! Etch is trapezoidal in shape, which makes both the materials and the calculations difficult. There are no usable rules of thumb; just be anal. I rederived all the equations and ran them in a spreadsheet. Sweeping the dimensions, I found convergence in the accuracy/stability of the impedance match. If you need special drivers & receivers, then you'll need that exotic material. Choose well. Many board houses have never cut their teeth on these materials. In brief: everyone involved, the board house and every SW CAD package used, must support at least 1/10,000 of an inch of detail, and the best exotic material, without delamination. Then read an 18" stack of documents covering this by experts. Best I can offer... until hired ;-)
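That spreadsheet approach translates naturally into a few lines of code. As a minimal sketch (my reconstruction, not BB's actual equations), here is the commonly cited IPC-2141-style symmetric-stripline approximation, with the trapezoidal etch handled crudely by averaging the top and bottom trace widths; all dimensions below are illustrative assumptions:

```python
# Hedged sketch of a stripline impedance check, in the spirit of BB's
# spreadsheet. Uses the IPC-2141-style symmetric-stripline formula;
# the trapezoidal etch is approximated by averaging the trace widths.
import math

def stripline_z0(w_top_mil, w_bot_mil, t_mil, b_mil, er):
    """Characteristic impedance (ohms) of a symmetric stripline.

    w_top_mil / w_bot_mil: trace width at top/bottom of the trapezoid
    t_mil: trace thickness   b_mil: plane-to-plane spacing
    er: dielectric constant. Valid roughly for narrow traces (w/b < ~0.35).
    """
    w = (w_top_mil + w_bot_mil) / 2  # crude trapezoid correction
    return (60 / math.sqrt(er)) * math.log(1.9 * b_mil / (0.8 * w + t_mil))

# Example (made-up numbers): nominal 4 mil trace etched to a
# 3.5 / 4.5 mil trapezoid, 0.7 mil copper, 11 mil plane spacing, er = 4.0.
z0 = stripline_z0(3.5, 4.5, 0.7, 11.0, 4.0)
print(f"Z0 ~ {z0:.1f} ohms")  # compare against a 50-ohm +/-5% target
```

Sweeping the geometry in a loop and watching where Z0 stops moving is one way to find the kind of convergence BB describes, though a field solver is the real arbiter for trapezoidal cross-sections.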
I take my hat off to the specification gurus; if you get the protocol right, down to the PHY level, everyone else's job gets easier. Unterminated short-haul buses like PCI set performance/cost standards where SCSI and VME feared to tread, albeit at a cost of repeat prototype cycles.
I2C still trundles along just fine, but the newer serial buses have to be applauded for their reliability in the most cost-constrained applications. No exotic materials for them, yet we all rely on HDMI, SATA, and now Thunderbolt delivering unconscionably high data rates. I often wonder at the contrast between their undoubted performance and the conservative, much slower BPs of the past. Were they, in the final analysis, a tad over-engineered?
Clearly, much of the answer lies in the design of fast, advanced analog receivers, rather than in the wiring.
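Those receivers earn their keep largely through equalization. A decision-feedback equalizer (DFE), for instance, subtracts the trailing intersymbol interference of bits it has already decided, so clean data can come through a channel whose raw eye is closed. A toy sketch (channel taps and gains are my own illustrative assumptions, not any particular PHY):

```python
# Toy decision-feedback equalizer on an NRZ stream, illustrating why
# receiver smarts can rescue a lossy channel. All coefficients are
# illustrative assumptions, not a real backplane or PHY spec.
import random

random.seed(1)
bits = [random.choice([-1, +1]) for _ in range(10_000)]

# Lossy channel: attenuated main cursor plus two post-cursor ISI taps.
# Post-cursor ISI (0.35 + 0.25) exceeds the main cursor (0.5): eye closed.
channel = [0.5, 0.35, 0.25]
rx = [sum(c * bits[i - k] for k, c in enumerate(channel) if i - k >= 0)
      for i in range(len(bits))]

def slice_stream(samples, dfe_taps=None):
    """Hard-decide each sample, optionally cancelling post-cursor ISI."""
    decisions, errors = [], 0
    for i, x in enumerate(samples):
        if dfe_taps:  # subtract ISI predicted from past decisions
            x -= sum(t * decisions[i - 1 - k]
                     for k, t in enumerate(dfe_taps) if i - 1 - k >= 0)
        d = 1 if x >= 0 else -1
        decisions.append(d)
        errors += d != bits[i]
    return errors

print("errors, no EQ:", slice_stream(rx))                        # ~25% fail
print("errors, DFE  :", slice_stream(rx, dfe_taps=[0.35, 0.25]))  # clean
```

In this noiseless toy the DFE taps exactly match the channel's post-cursors, so it recovers every bit; real receivers must adapt their taps and live with error propagation, which is precisely the "fast, advanced analog" hard part.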