As I’ve mentioned before (time and time and time again), we have developed an insatiable desire for ever-increasing bandwidth.
I can remember the days when the master clock on a circuit board was only a few hundred kilohertz. Actually, I can recall the first circuit board I worked on in the early 1980s when we pushed the clock up to 1 MHz (yes, that’s megahertz, NOT gigahertz). I remember us all standing around looking at each other wondering what was going to happen when the lead engineer powered the board up (it worked).
I can imagine this scene being played out in a film (black-and-white, of course), with all of the team members standing around in ill-fitting suits and ties, and one of the engineers saying (with a light Scottish accent): “Man was never intended to clock circuit boards at these speeds – this will end in tears, you mark my words.”
If you were in possession of a time machine (see also my blog on the best time travel story ever) and you returned to those far-off days and told us about gigahertz clocks and data transfer rates measured in gigabits per second ... we would have laughed our socks off.
And now look at us. I remember the first time I saw a transceiver that could transfer data at ~1 Gbps and thinking "Wow!" Then it was ~3 Gbps; then ~6 Gbps; then ~12 Gbps; and now next-generation FPGAs are boasting ~28 Gbps.
“When will this madness end?” you cry. Well, no time soon, that’s for sure, but we are going to have to come up with different ways to move data around.
Let’s take a step back and think about what we’re going to want to do in the not-so-distant future. How about an entire high-definition movie downloaded to your TV or PC in less than a second? What about 3D video conferencing from mobile phones or holographic email messages or … we can let our imaginations run wild here. The problem is that our existing technologies are about to come to a grinding halt.
I’m more of a chip man myself, but I do have to say that today’s circuit boards are incredibly clever in their own right. Today’s PCB track widths are down to a few thousandths of an inch, which is totally amazing to me. The problem is that the copper interconnect on circuit boards involves complex routing and exhibits lossy characteristics with limited reach.
The losses really start to increase at higher data rates (roughly 3.5x going from 10 Gbps to 30 Gbps), as illustrated in the following image. The only solution is to move to higher-cost PCB materials such as Megtron6, which means a 5x increase in cost for equivalent losses at 30 Gbps.
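To get a feel for what that 3.5x loss increase means in practice, here's a back-of-the-envelope sketch. The loss budget and the per-inch loss figures below are hypothetical illustration values, not measurements from the image or from any datasheet; only the ~3.5x scaling factor comes from the text above.

```python
# Hypothetical illustration: how much copper reach shrinks when channel
# loss scales ~3.5x (10 Gbps -> 30 Gbps, per the figure discussed above).
# The budget and per-inch loss numbers are assumptions, not real data.
LOSS_BUDGET_DB = 25.0              # assumed total channel loss budget
BASE_LOSS_PER_INCH = 0.5           # assumed dB/inch at 10 Gbps

channels = {
    "standard PCB @ 10 Gbps": BASE_LOSS_PER_INCH,
    "standard PCB @ 30 Gbps": BASE_LOSS_PER_INCH * 3.5,  # ~3.5x higher loss
}

for name, loss_per_inch in channels.items():
    reach_inches = LOSS_BUDGET_DB / loss_per_inch
    print(f"{name}: ~{reach_inches:.0f} inches of reach")
```

Under these assumed numbers, the usable trace length drops from roughly 50 inches to roughly 14 inches, which is why board designers end up reaching for exotic (and expensive) materials.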
But even with everything we can do, copper is ultimately destined to fail as data rates continue to increase. The obvious solution is to use optical interconnect, which offers nearly endless bandwidth (relatively speaking). Optical interconnect has lossless characteristics (at least, it does compared to copper) along with exceptional reach and scalability.
The next question is how to implement optical interconnect at the chip and circuit board levels. Let’s take the board level first. As I discussed in the Alternative and Future Technologies chapter of my book Bebop to the Boolean Boogie: An Unconventional Guide to Electronics (with apologies for the shameless self-promotion), one technique would be to use photo-imageable polyimide to fabricate optical waveguides directly into the interior layers of the circuit board (this would also require optical detectors and transmitters on the underside of the chip package). Although this may be of significant interest in the future, it would make circuit boards and chip packages too expensive today.
The alternative is to simply (the term “simply” is relative here) mount optical transceiver subsystems inside the chip package. For example, consider today’s announcement by Altera, in which they talk about their plans to introduce FPGAs with general-purpose I/O (GPIO) on the bottom of the package and optical I/O ports on the side (for more information on Altera’s optical developments, including a white paper on the topic, visit www.altera.com/optical).
This technology would be suitable for chip-to-chip, card-to-card, and chip-to-backplane applications. It would also allow the use of low-cost PCBs. But what about the cost of the new optics-based chips/packages themselves?
Well, some of today’s existing applications – like communications backplane boards – already have optical devices (optical interconnect on one side, electrical on the other) connected to FPGAs, all mounted on the same circuit board. With Altera’s scheme, both the optics and the FPGA will be presented in a single package, which will weigh less and occupy less circuit board real estate. Using optics in this way offers tremendous advantages, including dramatically reducing system complexity, cost, and power, and also eliminating the signal integrity issues associated with copper-based solutions.
But what the folks at Altera are talking about goes far beyond this, because they are proposing to use this technology in all sorts of applications, including consumer products. Is this realistic? Well, according to Altera, this form of optics will soon be economically viable. As an example, they point to Thunderbolt technology from Apple and Intel (described on both Apple’s and Intel’s websites).
Of course, Thunderbolt (which can use both electrical and optical connections) is a different kettle of fish from what the guys and gals at Altera are talking about, not least because Apple’s first Thunderbolt-enabled notebook is available now, while the technology Altera is describing may be three to five years out. The main thing here is that optical interconnect is coming to the desktop, which is going to open the floodgates.
My understanding is that each of the optical connectors that plug into Altera’s chips will contain a bunch of fibers (50, 100, 150, 200… who knows?), and that each of these fibers will start off carrying up to 100 Gbps of data (or more) … but maybe I’m just putting words into Altera’s metaphorical mouth here … I will leave it to the folks at Altera to comment further.
Another question relates to how the optical subsystem and the FPGA die will be electrically connected inside the package. I’m sure that I have no idea what Altera’s plans are here. I’m also sure that the folks from Altera won’t thank me for pointing out that Xilinx’s stacked silicon interconnect (SSI) technology, which uses passive silicon interposers, microbumps, and through-silicon vias (TSVs), would be ideal for this sort of thing (see also Stacked and Loaded: Xilinx SSI, 28-Gbps I/O yield amazing FPGAs).
When the folks at Xilinx first briefed me on their SSI technology way back in the mists of time, I immediately asked about the possibility of using it to combine one or more optical components along with multiple FPGA die, and they all started muttering about how nice the weather was and would I care for another beer.
So now we have the interesting position where the folks at Altera are publicly talking about the fact that, at some time in the future, they intend to deliver devices combining optical interconnect and FPGA fabric in the same package ... but they aren’t telling us exactly how they plan to do this.
On the other side of the fence we have the folks at Xilinx, who have announced that they are using stacked silicon interconnect technology to mount multiple FPGA die inside the same package ... but who have not (publicly, to the best of my knowledge) discussed the possibility of using this technology to combine optical interconnect and FPGA die in the same package.
What does the future hold? Who knows? I love this stuff!