John D'Ambrosia muses about the IEEE 802.3bj 100 Gb/s Backplane and Copper Cable Task Force he chairs.
Time travel! Forecasts! Different markets—different applications! The laws of physics being pushed to current known limits! (Funny how that "hope/change" mumbo jumbo doesn't apply here!).
OK, what the heck am I talking about?
Well, to most this might sound like I am going off the deep end, but I swear I haven't! I am currently contemplating the IEEE 802.3bj 100 Gb/s Backplane and Copper Cable Task Force that I chair.
Alright! Flash back to the first paragraph and you are probably going "Huh?"
This project is really wrapped up in that first paragraph.
Switching and routing need 100G for their backplanes now. Blade servers? Well, probably not until 2017, but the backplanes that will need to be upgraded? Well, they probably start shipping in 2014.
So what is the right choice?
Hey, these things aren't easy, but this is why we supposedly get paid the big bucks, right? Yeah, right! Just let me call upon "The Force!"
Some are arguing that NRZ signaling will be the right solution, but others are arguing for the development of both NRZ and PAM-4 solutions. It all comes down to the channel, THE statement of work! And that is the problem. The costs of today are not the costs of tomorrow. And no one has a crystal ball (because if they did, they would probably be off making a fortune in the stock market!).
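As a rough back-of-the-envelope illustration of the NRZ vs. PAM-4 trade-off (my sketch, not anything from the task force's analysis — the 25 Gb/s lane rate is just an assumed example): PAM-4 carries two bits per symbol, so it halves the baud rate the channel must pass, but it splits the signal swing into three smaller eyes.

```python
import math

def line_rates(bit_rate_gbps, levels):
    """Symbol (baud) rate and relative eye height for a PAM line code.

    NRZ is PAM-2 (1 bit/symbol); PAM-4 carries 2 bits/symbol, halving
    the baud rate but splitting the full swing into (levels - 1) eyes.
    """
    bits_per_symbol = math.log2(levels)
    baud_gbd = bit_rate_gbps / bits_per_symbol
    eye_height = 1.0 / (levels - 1)               # fraction of full swing per eye
    amp_penalty_db = 20 * math.log10(levels - 1)  # amplitude penalty vs. NRZ
    return baud_gbd, eye_height, amp_penalty_db

# Example: 25 Gb/s per lane (4 lanes x 25G = 100G)
for name, levels in [("NRZ", 2), ("PAM-4", 4)]:
    baud, eye, penalty = line_rates(25.0, levels)
    print(f"{name}: {baud:.1f} GBd, eye = {eye:.2f} of swing, "
          f"{penalty:.1f} dB amplitude penalty")
```

So the choice trades channel bandwidth (NRZ needs roughly twice the baud rate) against signal-to-noise margin (PAM-4 gives up roughly 9.5 dB of eye amplitude), which is exactly why the channel definition drives the answer.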
In my former life I used to say "The application and its economics will dictate the right solution!" I would suggest that all those involved in this project take a step back and consider each other's perspective.
John D'Ambrosia is chief Ethernet evangelist, CTO office, at Dell.
I admire the task and the wisdom to jump into this blazing ride; I get a headache just thinking of the 3D electromagnetic modeling required. I don't know much about other media, but don't photo/laser drivers/receivers still need to do a medium exchange to an electrical PHY, at a throughput penalty?
I forgot to mention protocol; we used SONET, since that's what the guys designing our ASICs knew best (and it's extremely robust). However, the testing was protocol-less, since there wasn't any HW (besides the experimental "super-LVDS" transceivers) that could execute any protocol at those speeds.
I take my hat off to the specification gurus; if you get the protocol right, down to the PHY level, everyone else's job gets easier. Unterminated short-haul buses like PCI set performance/cost standards where SCSI and VME feared to tread, albeit at the cost of repeated prototype cycles.
I2C still trundles along just fine, but the newer serial buses have to be applauded for their reliability in the most cost-constrained applications. No exotic materials for them, yet we all rely on HDMI, SATA, and now Thunderbolt delivering unconscionably high data rates. I often wonder at the contrast between their undoubted performance and the conservative, much slower backplanes of the past. Were they, in the final analysis, a tad over-engineered?
Clearly, much of the answer lies in the design of fast, advanced analog receivers, rather than in the wiring.
Biggest lessons: Impedance formulas were not consistent, even from the same companies! Etch is trapezoidal in cross-section, which complicates both materials selection and the calculations. No rules of thumb are usable; just be meticulous. I rederived all the equations and ran them in a spreadsheet; iterating the dimensions brought the impedance match to converge in accuracy and stability. If you need special drivers & receivers, then you'll need that exotic material. Choose well — many board houses have never cut their teeth on these materials. And, for brevity: every board house and every SW CAD package used must support at least 1/10,000-inch detail, with the best exotic material, without delamination. Then read an 18" stack of documents covering this by the experts. Best I can offer... until hired ;-)
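The spreadsheet rederivation described above can be sketched in a few lines. This uses the classic IPC-2141 surface-microstrip approximation (only one of the inconsistent published formulas the comment warns about, and valid only over a limited w/h range); the dielectric constant and dimensions below are made-up examples, not values from the original design:

```python
import math

def microstrip_z0(h_mil, w_mil, t_mil, er):
    """Characteristic impedance (ohms) of a surface microstrip, per the
    classic IPC-2141 approximation:
        Z0 = 87 / sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t))
    h: dielectric height, w: trace width, t: trace thickness (all mils).
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(
        5.98 * h_mil / (0.8 * w_mil + t_mil))

# Sweep trace width to see how far a fab's etch tolerance moves Z0
# (example numbers only: FR-4-ish er = 4.2, 5 mil dielectric, 1.4 mil copper)
for w in (7.0, 8.0, 9.0):
    print(f"w = {w} mil -> Z0 = {microstrip_z0(5.0, w, 1.4, 4.2):.1f} ohm")
```

Running the sweep shows roughly how much a mil of over- or under-etch shifts the impedance — which is why the comment insists on 1/10,000-inch detail and on cross-checking formulas against TDR measurements rather than trusting any single equation.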
Mark! Previous boss, glad to hear from you. Not many bosses speak well of employees after 10 years.
I was conservative and driven. There was no minimizing the importance of the backplane, or its importance to the company, so it had to perform at extreme throughput. It required special drivers & receivers with additional performance features. For the BP, I laid out the etch to 1/10,000 of an inch. The geometries converged to drop impedance errors to less than 5%. We used EO TDR equipment to scour the impedance match from card to BP to card. The highest glitches were at the connectors. We worked on it together. I think the biggest help was the 3-month wait for the architecture to congeal, which allowed me to read a stack of documents on how to design a high-speed BP — a stack about 18" off the floor in my office. You might recall the stack. Anyway, good to hear from you!!! I still have my notes in case you want to go solo on this venture. -BB