Well, way back in the dim dark past (2000-2001, before the Great Telecom Meltdown of 2002), a start-up in Allen, TX that I worked for was building a really high-speed metro switch. The BP had a bunch of LVDS buses running at 2.5 Gb/s; however, this was CO stuff, and had to be in-service upgradeable for 20 years! So our target was to be sure we could handle 10 Gb/s error-free. The guy assigned to it was very conservative, and built us a BP with inner layers using exotic materials (GoreTex?). We didn't have any silicon to really test it with beyond the then-current 2.5 Gb/s, so we contacted a major semi manufacturer who we knew was working on some really far-out tech. We shipped them a BP along with some connectorized paddle boards. They checked it out for us with experimental semis at 10 Gb/s: NO ERRORS. Then for kicks they started running up the clock to the limit. Somewhere near 40 Gb/s they started getting some errors! That BP turned out to be AWESOME, but overkill; as I recall we finally got it down to 14 layers, and a volume production cost (with connectors) over $3K! If anybody wants the name of the guy who did it, let me know....
Yes, would be nice to know the name of this guy. Would his initials be "B.U."? Did this start-up name begin with "X" by any chance?
I too live in Allen TX and have designed backplanes, but not for bit rates as high as you describe. Would be interesting to know the card connectors, and whether the diff pairs were edge or broadside coupled. Maybe this mystery guru could write an article...
Mark! Previous Boss, glad to hear from you. Not many bosses speak well of employees after 10 yrs.
I was conservative and driven. There was no minimizing the importance of the backplane to the company, or the need for it to perform at extreme throughput. It required special drivers & receivers, with additional performance features. I laid out the BP etch to 1/10,000 of an inch. The geometries converged to drop impedance errors to less than 5%. We used EO TDR equipment to scour the impedance match from card to BP to card. The highest glitches were the connectors. We worked on it together. I think the biggest help was the 3-month wait for the architecture to congeal, which allowed me to read a stack of documents on how to design a high-speed BP that rose about 18" off the floor in my office. You might recall the stack. Anyway, good to hear from you!!! I still have my notes in case you want to go solo on this venture. -BB
Biggest lessons: Impedance formulas were not consistent, even from the same companies! Etch is trapezoidal in cross-section, which complicates both the materials and the calculations. There are no usable rules of thumb; just be anal. I rederived all the equations and ran them in a spreadsheet. Sweeping the dimensions found convergence in the accuracy/stability of the impedance match. You'll need special drivers & receivers, and then that exotic material. Choose well. Many board houses have never cut their teeth on these materials. And, for brevity: the board house and every SW CAD package used must support at least 1/10,000 of an inch of detail, and handle the best exotic material without delamination. Then read an 18" stack of documents covering this by experts. Best I can offer... until hired ;-)
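For anyone wanting to sanity-check their own spreadsheet numbers, here's a minimal sketch (mine, not BB's actual spreadsheet) of the kind of dimension sweep he describes, using the well-known IPC-2141 approximation for symmetric stripline. All dimensions and the dielectric constant below are illustrative assumptions, not the actual BP stackup.

```python
import math

def stripline_z0(er, b, w, t):
    """IPC-2141 approximation for symmetric stripline impedance (ohms).
    er: dielectric constant, b: plane-to-plane spacing,
    w: trace width, t: trace thickness (all in the same units).
    Valid roughly for w/b < 0.35 and t/b < 0.25."""
    return 60.0 / math.sqrt(er) * math.log(1.9 * b / (0.8 * w + t))

# Sweep trace width (inches) for an assumed FR-4-class stackup.
er, b, t = 4.2, 0.020, 0.0014   # illustrative values only
for w in (0.003, 0.004, 0.005, 0.006, 0.007):
    z0 = stripline_z0(er, b, w, t)
    print(f"w = {w:.4f} in  ->  Z0 ~ {z0:.1f} ohm")
```

Note that the post's point still stands: published formulas disagree and assume a rectangular cross-section, so a sweep like this only gets you a starting point before TDR verification on real boards.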
I take my hat off to the specification gurus; if you get the protocol right, down to the PHY level, everyone else's job gets easier. Unterminated short-haul buses like PCI set performance/cost standards where SCSI and VME feared to tread, albeit at a cost of repeat prototype cycles.
I2C still trundles along just fine, but the newer serial buses have to be applauded for their reliability in the most cost-constrained applications. No exotic materials for them, yet we all rely on HDMI, SATA and now Thunderbolt delivering unconscionably high data rates. I often wonder at the contrast between their undoubted performance and the conservative, much slower BPs of the past. Were they, in the final analysis, a tad over-engineered?
Clearly, much of the answer lies in the design of fast, advanced analog receivers, rather than in the wiring.