Editor's Note: This article is abstracted from a paper I created for the guys and girls at MagnaLynx, and it is presented here with their kind permission.
Do you recall some time ago when I penned an article titled How to turn every FPGA LVDS pair into a complete SERDES solution? That article described an interesting technology called Align Locked Loops (ALLs), invented by a company called Align Engineering.
Well, shortly after that I received an email from Mitch Anderson of a hot new company called MagnaLynx, saying:
Hi Max, I just read your article on ALLs, and I wondered if you might be interested in looking at our MagnaPHY serial interconnect technology. This is a very similar value proposition (high speed, low pin count, etc.) but more oriented toward high-speed embedded applications.
Now, I'm always interested in hearing about new, "hot-off-the-shelf" technologies, so I gave Mitch a call and we set up a meeting. Mitch and his colleagues flew down to see me (they are also amateur pilots), and what they told me was very exciting indeed . . .
Let's start with a brief overview, and then plunge down into some nitty-gritty details. First of all, we all know that high-speed serial interconnect is the connection mechanism of choice for today's state-of-the-art designs. There are numerous reasons for this, but the main ones as far as we're concerned here are high speed, high bandwidth, and – very importantly – low pin count. As a simple example, consider a memory chip connected to an FPGA (Fig 1).
1. Using high-speed serial interconnect to link an FPGA and a RAM.
As we see, this ×1 (one-lane) version requires two pins on each device to form the differential pair that implements the transmit path from the FPGA to the RAM, plus two more pins on each device for the receive path from the RAM back to the FPGA.
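To see why that low pin count matters, here's a back-of-the-envelope sketch in Python. The bus widths are hypothetical numbers I've picked purely for illustration – they don't come from any MagnaLynx or FPGA-vendor datasheet:

```python
# Rough pin-count comparison: differential serial lanes vs. a classic
# parallel memory interface. All widths here are illustrative assumptions.

def serial_pins(lanes):
    """A differential serial link uses 2 pins (P/N) per direction per lane."""
    return lanes * 2 * 2  # one TX pair + one RX pair per lane

def parallel_pins(data_width, addr_width, control=4):
    """A parallel SRAM-style interface needs data, address, and a few
    control pins (chip enable, write enable, etc.)."""
    return data_width + addr_width + control

print(serial_pins(1))         # x1 lane as in Fig 1: 4 pins per device
print(parallel_pins(32, 20))  # hypothetical 32-bit data, 20-bit address: 56 pins
```

Even a ×4 serial link (16 pins per device) comes in well under a modest parallel bus, which is the whole point of the exercise.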
Now, there are a variety of well-known serial interconnect standards and protocols roaming wild and free in the world. For example, when you see the diagram above, your knee-jerk reaction might be: "Ah, we're talking about PCI Express, or RocketIO, or . . ." Not so!
Of course PCIe and RocketIO – along with other high-speed techniques such as 10 gigabit Ethernet (10GbE) – are very powerful, but they were originally conceived almost as full networking protocols for use in system-to-system and board-to-board scenarios. To put this another way, none of these standards was originally envisaged with chip-to-chip communications in mind.
The problem is that protocols like PCIe tend to try to be "all things to all men", with the result that there are substantial overheads associated with these little rascals. Now, a few hundred nanoseconds of latency is typically not critical when we're talking about system-to-system or board-to-board communications, but it can be something of a "knee-in-the-groin" when one is focused on chip-to-chip communications.
Something had to be done, which is where the folks at MagnaLynx leap onto the center of the stage with a fanfare of trumpets. Now, being engineers, they know that no one wants to re-invent the wheel (although, in my opinion, we might go for a more interesting shape next time). In the case of FPGAs, for example, MagnaPHY uses the existing hard macros that are employed to implement conventional high-speed SERDES protocols.
For example, the MagnaPHY serial interconnect technology from MagnaLynx uses 10-bit symbols similar to PCIe, but it doesn't use conventional 8b/10b encoding per se.
As an aside. . . One might think of the MagnaPHY encoding as 9b/10b, but this is something of a simplification. Another way to look at things is that if you consider 8 bytes of raw data, plus an additional 8 bits to implement ECC (Error Correcting Code), then you have eight 9-bit fields that would actually be transmitted as eight 10-bit fields. So when the folks at MagnaLynx talk about "Gigabyte Bandwidths", they are talking about working with "nine-bit bytes" (if you see what I mean). But we digress . . .
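We can sanity-check that arithmetic in a couple of lines of Python. This is just my own back-of-the-envelope comparison of line-coding efficiency, not an official MagnaLynx figure:

```python
# Line-coding efficiency: conventional 8b/10b vs. the "9b/10b" view
# described above (8 bytes of payload + 8 ECC bits in eight 10-bit symbols).

RAW_BITS  = 8 * 8          # 8 bytes of raw payload data
ECC_BITS  = 8              # additional ECC bits
SYMBOLS   = 8              # transmitted as eight 10-bit symbols
LINE_BITS = SYMBOLS * 10   # 80 bits on the wire

# Conventional 8b/10b carries 8 payload bits per 10-bit symbol
efficiency_8b10b = 8 / 10

# The "9b/10b" view carries 72 useful bits (payload + ECC) per 80 line bits
efficiency_9b10b = (RAW_BITS + ECC_BITS) / LINE_BITS

print(f"8b/10b: {efficiency_8b10b:.0%}")   # 80%
print(f"9b/10b: {efficiency_9b10b:.0%}")   # 90%
```

In other words, for the same 80 bits on the wire you move 72 useful bits instead of 64, which is where the "nine-bit byte" framing comes from.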
Now, where was I? Oh yes. . . One crucial point about MagnaPHY is that it's easy to understand and to implement in your FPGA designs. For example, as illustrated in Fig 2, let's compare the sheer size of the specifications for PCI Express (left) and MagnaPHY (right).
2. Comparing the size of the specifications for PCI Express (left) and MagnaPHY (right).
Let's take a quick survey. Which of these specifications would you like to take home and learn this weekend? Put your hand up if you think PCI Express is the way to go. . . now let's see who thinks MagnaPHY might be just a tad easier to learn . . . well, I think the results speak for themselves.
All joking aside, the fact that the MagnaPHY specification is so much smaller actually does provide an indication as to the efficiency of this protocol. Once again, let's remind ourselves that MagnaPHY is focused only on chip-to-chip communications, which means it doesn't need all of the overheads required to implement a full board-to-board or system-to-system networking protocol.
The result is an extremely efficient, high-bandwidth, low-latency (sub-10-nanosecond) protocol that can either enable high-end systems or provide for cost reductions in lower-end products. Furthermore, MagnaPHY provides a realistic path to the terabit throughputs that we're all going to be demanding in the not-so-distant future.
In my conversations with Mitch and the other guys, they inundated me with technical details, such as the fact that MagnaPHY significantly reduces Bit Error Rates (BERs). How? Well, Total Jitter = Deterministic Jitter + Random Jitter. It seems that conventional high-speed interconnects operate with BERs on the order of 10⁻¹² (that's ten to the minus 12) based on ±7-sigma values on the distribution curve. Apparently MagnaPHY provides for significantly reduced deterministic jitter values, which allows it to tolerate higher random jitter values, which results in BERs on the order of 10⁻²⁰ to 10⁻²¹ (that's ten to the minus 20 to 21) based on ±10-sigma values on the distribution curve. I didn't understand a word of this, but it certainly sounded good (and I know – to my cost – that they would be delighted to talk to you about it in excruciating detail)!
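For the curious, the sigma-to-BER relationship has a textbook approximation: if random jitter is Gaussian, the probability of sampling on the wrong side of the decision threshold falls off as the Gaussian tail. Here's a minimal sketch of that idealized model – it's standard signal-integrity math, not MagnaLynx's actual analysis, and it won't reproduce their quoted 10-sigma figures exactly (their numbers presumably fold in the deterministic-jitter budget as well):

```python
# Idealized Gaussian-tail model relating jitter margin (in sigmas of
# random jitter) to bit error rate. A textbook approximation only.
from math import erfc, sqrt

def ber(q_sigma):
    """BER for a sampling point q_sigma standard deviations of Gaussian
    random jitter away from the decision threshold."""
    return 0.5 * erfc(q_sigma / sqrt(2))

print(f"{ber(7):.1e}")   # ~1e-12, in line with conventional links
print(f"{ber(10):.1e}")  # many orders of magnitude lower again
```

The takeaway matches the pitch: every extra sigma of margin you claw back from deterministic jitter buys you a multiplicative collapse in error rate, not a linear one.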
Now, one consideration is that FPGAs contain programmable fabric, which allows the guys and gals at MagnaLynx to implement the "Secret Sauce" required to augment the FPGA's hard SERDES macros and implement MagnaPHY. In the case of other devices – such as RAMs – it's necessary to embed some small hard IP.
In the fullness of time, the folks at MagnaLynx would like to see MagnaPHY IP embedded in every chip under the sun. In the shorter term, however, they have created the MagnaLynx ML1S family of high-speed, single-port static RAMs. In addition to providing a killer "proof-of-concept", these chips are ideal for communications, storage, computing, test-and-measurement, and other applications requiring high-performance data buffering with maximum density and minimum power.
Now, I don't want to scare you, but I do want you to know that this is all real (besides, who among us can resist the lure of a real-world test-bench environment?). Thus, Fig 3 shows one of these MagnaPHY-enabled memory devices hooked up to an FPGA development board.
3. A test-bench environment showing a MagnaPHY-enabled memory device hooked up to an FPGA development board.
Ah, how this takes me back ... to this morning in my workshop as fate would have it, but that's another story...