Starting with the transatlantic telegraph cable, laid in 1866, telecommunications applications have always been the driver for the highest data rate systems. To feed this need for speed, the signal integrity performance of the interconnects has had to constantly improve. But a sea change occurring in the FPGA world is altering this trend.
"If you look at the last few generations of systems, the bit rate has been increasing, from 3 Gbps to 6 to 10 and soon to 14 and then on to 28 Gbps, but the interconnect has stayed largely the same," Dr. Mike Peng Li, principal architect and distinguished engineer with Altera said in an interview recently.
In the last few generations of high speed serial links, it is the silicon that has overcome the limitations of the interconnects and enabled the constant march to higher data rates. This trend continues in the next generation of FPGAs recently announced by Altera.
"We see internet backbone applications driving the need for speed," Salman Jiva, product marketing manger at Altera said recently. To implement 100 Gbps Ethernet optical modules, line cards typically multiplex 10 lanes, each running at 10 Gbps. But, Jiva says, this will evolve to 4 lanes at 25 Gbps followed by 10 or more lanes at 25 Gbps.
While Altera has been shipping FPGAs operating at 10 Gbps since 2008, it recently announced transceivers operating at up to 28 Gbps per channel. This means much of the multiplexing can be done in the FPGA, and fewer components are needed on the line cards to feed the data-hungry optical modules.
But it takes more than just TSMC's 28-nm high performance process technology to enable successful 28 Gbps transceivers, Li hastens to add.
In a recent interview, Li and Jiva mentioned four important new features coming to market in the latest generation of FPGAs, such as the Stratix V, that help to overcome the interconnect barrier posed by conventional circuit boards, and balance the cost-performance-power tradeoffs.
Data reliability is measured by the bit error rate. A typical product spec is to have no errors in the received data during the life of the product. If the lifetime is 5 years, and the data rate is 28 Gbps, the bit error rate (BER) must be less than 10⁻¹⁸.
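A quick back-of-the-envelope check of that figure (a minimal sketch in Python; the rate and lifetime are the numbers quoted above):

```python
# Sketch: the lifetime bit count behind the BER spec quoted above.
SECONDS_PER_YEAR = 365 * 24 * 3600   # ~3.15e7 s

bit_rate = 28e9        # 28 Gbps, per the article
lifetime_years = 5

total_bits = bit_rate * lifetime_years * SECONDS_PER_YEAR
max_ber = 1 / total_bits  # at most one error over the product's life

print(f"bits over life: {total_bits:.2e}")  # ~4.4e18 bits
print(f"required BER:   {max_ber:.2e}")     # ~2.3e-19, below the 1e-18 quoted
```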
Since you mentioned Nortel, I'm guessing you're Canadian and I think "bit error ratio" must be a Canadian thing. In 25 years of comms engineering in the U.S., I have always heard BER defined as "bit error rate."
True KB3001, FPGAs continue to eat into the ASIC market, but the reason has mostly to do with the cost of designing ASICs going through the roof, not with high speed link technology. Ten years ago I worked on ASICs with 2.5-10 Gb/s IOs while FPGAs at the time could deliver 1 Gb/s, so there was a gap. But that gap still exists: FPGAs can do 10 Gb/s while highly specialized ASICs can do 100 Gb/s (the numbers above are per differential pair; you can always increase the bandwidth by going more parallel). One can of course argue that 10 Gb/s per 2 pins is sufficient, so that gap is less relevant and the FPGA is on par. ASIC development cost, however, used to be in the single millions of dollars and is now several millions of dollars, so the TAM required to justify the cost exists only in a very small number of system level sockets. Hence everyone is using FPGAs unless it is a cell phone, PC or Ethernet switch... Kris
It's Bit Error RATIO, not Bit Error Rate.
Li is waving the pom poms again, clumsily using ex-CTO credentials to claim "putting a big instrument in a tiny box inside the chip", when Nortel was shipping this BER-enhancing eye profiling technology in multi-gigabit chips in the early 1990's based on Tremblay et al.'s US patent 4,823,360 (1988), followed by NEC, Cisco, JDS, Vitesse, and others with patents in the same area. The eye asymmetries owing to the nature of fiber-optic AFEs seem to also be a revelation for Li, but are day-to-day life for the customers he has probably never been in the lab with. [yawn]
@zeeglen - chip inductors have been used for on-chip oscillators for over two decades. In 28nm, they'll take up a huge amount of chip area in terms of transistor count. You are also FOS, and the kid was right, about seeing eye-closing phase hits that happen once in tens of minutes if not hours.
Eye patterns are useful at any data rate, but it takes familiarity and experience to interpret them. I remember a meeting where a young guy just out of school claimed that one could not simply glance at an eye pattern and classify it as good, acceptable, or bad. I had to set him straight that maybe kids fresh out of school could not, but those who have looked at eye patterns for 30 years certainly can.
Fascinating stuff to those of us who are not on the cutting edge. I remember eye patterns from 50 baud FSK systems and 9600 BPS Codex modems. (God, I sound like a real old fart....) As data rates got faster they seemed to be done away with. Nice to see them still being used at 10 GHz. I always reckoned they were immensely valuable, you could see from the pattern exactly what was wrong with your link.
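For anyone who hasn't built one, an eye pattern is just successive unit intervals (UI) of the received waveform overlaid on a common time axis. A minimal sketch of the idea in Python (the signal parameters here are made up for illustration, not from any real link):

```python
# Sketch: form an eye pattern by overlaying 2-UI slices of a noisy NRZ waveform.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples_per_ui = 64
n_bits = 200

bits = rng.integers(0, 2, n_bits)
# Hold each bit over one UI, then smooth and add noise to mimic a real channel.
wave = np.repeat(2.0 * bits - 1.0, samples_per_ui)
kernel = np.ones(16) / 16  # crude low-pass: finite rise/fall times
wave = np.convolve(wave, kernel, mode="same") + 0.05 * rng.standard_normal(wave.size)

# Slice into 2-UI segments, one per bit, and overlay them.
seg = 2 * samples_per_ui
t = np.arange(seg) / samples_per_ui  # time axis in UI
for i in range(0, wave.size - seg, samples_per_ui):
    plt.plot(t, wave[i:i + seg], color="b", alpha=0.05)
plt.xlabel("time (UI)")
plt.ylabel("amplitude")
plt.title("Eye pattern (synthetic NRZ link)")
plt.show()
```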
Noticed something else in the same paragraph that is of interest. Does Altera need an external copper inductor, or is the L part of the internal silicon?
The use of an LC tank in a VCO for PLL clock recovery is not new; it was a conventional method long before the rickety voltage controlled ring oscillator was ever used. But if the L has been incorporated into the silicon then yes, that is relatively new. If so, might you have a link to Altera publications describing this technique in more detail?
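On the LC tank question, the oscillation frequency is set by f0 = 1/(2π√(LC)). A minimal sketch with hypothetical on-chip values (assumed for illustration, not taken from any Altera documentation) shows that a sub-nH spiral inductor and a few hundred fF of tank capacitance land in the right range for a half-rate clock of a 28 Gbps transceiver:

```python
# Sketch: LC tank resonant frequency with hypothetical on-chip values.
import math

L = 0.5e-9   # 0.5 nH spiral inductor (assumed)
C = 250e-15  # 250 fF total tank capacitance (assumed)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # f0 = 1 / (2*pi*sqrt(LC))
print(f"tank resonant frequency: {f0 / 1e9:.1f} GHz")  # ~14.2 GHz
```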