Time travel! Forecasts! Different markets—different applications! The laws of physics being pushed to current known limits! (Funny how that "hope/change" mumbo jumbo doesn't apply here!).
OK, what the heck am I talking about?
Well, to most this might sound like I am going off the deep end, but I swear I haven't! I am currently contemplating the IEEE 802.3bj 100 Gb/s Backplane and Copper Cable Task Force that I chair.
Alright! Flash back to the first paragraph and you are probably going "Huh?"
This project is really wrapped up in that first paragraph.
Switching and routing need 100G for their backplanes now. Blade servers? Probably not until 2017, but the backplanes that will need to be upgraded? They'll probably start shipping in 2014.
So what is the right choice?
Hey, these things aren't easy, but this is why we supposedly get paid the big bucks, right? Yeah, right! Just let me call upon "The Force!"
Some are arguing that NRZ signaling will be the right solution, while others are arguing for developing both NRZ and PAM-4 solutions. It all comes down to the channel, THE statement of work! And that is the problem. The costs of today are not the costs of tomorrow. And no one has a crystal ball (because if they did, they would probably be off making a fortune in the stock market!).
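For those keeping score at home, here is a minimal back-of-the-envelope sketch of the core tradeoff (my own illustrative numbers, not anything blessed by the task force): PAM-4 packs two bits per symbol, so it halves the symbol rate the channel has to carry, but at the same peak swing each of its three eyes is one third of the NRZ eye, roughly a 9.5 dB hit before equalization and coding are taken into account. The 25.78125 Gb/s lane rate below assumes 64b/66b line coding on a 25G lane.

```python
import math

# Back-of-the-envelope NRZ vs. PAM-4 comparison for a 4 x 25G backplane
# lane. Illustrative only: assumes 64b/66b coding overhead and equally
# spaced PAM levels at the same peak swing, ignoring FEC and equalization.

def symbol_rate_gbaud(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    """Symbol (Baud) rate needed to carry a given serial bit rate."""
    return bit_rate_gbps / bits_per_symbol

def eye_penalty_db(levels: int) -> float:
    """Eye-amplitude reduction vs. NRZ for M-level PAM at the same swing.

    Each of the M-1 eyes gets 1/(M-1) of the full swing, i.e. a
    20*log10(M-1) dB smaller vertical opening than NRZ.
    """
    return 20 * math.log10(levels - 1)

LANE_RATE_GBPS = 25.78125  # 25 Gb/s payload plus 64b/66b overhead (assumed)

for name, bits in [("NRZ (PAM-2)", 1), ("PAM-4", 2)]:
    baud = symbol_rate_gbaud(LANE_RATE_GBPS, bits)
    penalty = eye_penalty_db(2 ** bits)
    print(f"{name}: {baud:.2f} GBaud per lane, {penalty:.1f} dB eye penalty vs. NRZ")
```

Run it and you get 25.78 GBaud versus 12.89 GBaud, with a 9.5 dB eye penalty for PAM-4. That is the argument in a nutshell: the slower symbol rate is kinder to a lossy backplane channel, but the smaller eyes demand more from the silicon.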
In my former life I used to say "The application and its economics will dictate the right solution!" I would suggest that all those involved in this project take a step back and consider each other's perspective.
John D'Ambrosia is chief Ethernet evangelist, CTO office, at Dell.
Yes, it would be nice to know the name of this guy. Would his initials be "B.U."? Did this start-up's name begin with "X" by any chance?
I, too, live in Allen, TX, and have designed backplanes, but not for bit rates as high as you describe. It would be interesting to know the card connectors used, and whether the diff pairs were edge- or broadside-coupled. Maybe this mystery guru could write an article...
Well, way back in the dim dark past (2000-2001, before the Great Telecom Meltdown of 2002), a start-up in Allen, TX that I worked for was building a really high-speed metro switch. The BP had a bunch of LVDS buses running @ 2.5 Gb/s; however, this was CO stuff and had to be in-service upgradeable for 20 years! So our target was to be sure we could handle 10 Gb/s error-free. The guy assigned to it was very conservative and built us a BP with inner layers using exotic materials (GoreTex?). We didn't have any silicon to really test it with beyond the then-current 2.5 Gb/s, so we contacted a major semi manufacturer who we knew was working on some really far-out tech. We shipped them a BP along with some connectorized paddle boards. They checked it out for us with experimental semis @ 10 Gb/s: NO ERRORS. Then, for kicks, they started running up the clock to the limit. Somewhere near 40 Gb/s they started getting some errors! That BP turned out to be AWESOME, but overkill; as I recall we finally got it down to 14 layers, and a volume production cost (with connectors) of over $3K! If anybody wants the name of the guy who did it, let me know....