The latest generation of DDR memory, DDR4, doubles the speed of the current generation of DRAMs, DDR3, with end-of-life data rates of 3.2 GT/s. Compared to the first generation of DDR memory, which started out at 200 MT/s, DDR4 will run 16 times faster. When DDR was introduced thirteen years ago, the typical semiconductor feature size was 130 nm and a 2.5-V operating voltage was the standard. Now devices are approaching 14-nm feature sizes and 1-V operating voltages, and the DRAM specs need to keep up. As DDR3 speeds approached 1600 MT/s and DRAM data valid windows shrank from 800 ps at 600 mV to less than 60 ps at 270 mV, the DDR4 standards development team recognized the need for a new approach.
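The arithmetic behind those numbers can be sketched briefly. This is an illustrative calculation from the rates quoted above, not from the JEDEC spec tables: the unit interval (UI) is simply the reciprocal of the data rate, and a 60 ps data valid window consumes only a small fraction of it.

```python
# Illustrative unit-interval arithmetic (numbers from the article, not
# from the JESD79-4 tables).
def unit_interval_ps(data_rate_mtps):
    """Unit interval in picoseconds for a data rate given in MT/s."""
    return 1e6 / data_rate_mtps

ui_ddr1 = unit_interval_ps(200)    # first-generation DDR: 5000 ps per bit
ui_ddr3 = unit_interval_ps(1600)   # fast DDR3: 625 ps per bit
ui_ddr4 = unit_interval_ps(3200)   # end-of-life DDR4: 312.5 ps per bit

# A 60 ps data valid window at 1600 MT/s is less than 10% of the UI;
# the rest of the bit time is consumed by jitter, skew and noise.
window_fraction = 60 / ui_ddr3     # 0.096
```

The same 60 ps window at the DDR4 end-of-life rate would be nearly a fifth of the bit time, which is one way to see why the old budgeting approach stops scaling.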
To address the challenges posed by higher data rates, the DDR4 specification (JESD79-4) adopts several proven strategies from modern high-speed serial specifications such as PCI Express and SuperSpeed USB. In fact, the DDR4 specification goes one step further by providing a way for designers to allocate the system timing and noise budget among the controller, the memory interconnect and the DRAM, something that is difficult, if not impossible, to accomplish with other high-speed interfaces.
When DDR assumptions become risky
The traditional specification of DDR AC parametrics has relied on several assumptions about setup/hold timing, jitter and noise that were valid for many years but are becoming increasingly risky in today's high-speed systems:
- Setup and hold times defined a clear boundary between reliable and unreliable data transfers: if an input signal met the specified Ts (setup) and Th (hold) timings, the data transfer would be 100% reliable.
- Random jitter made up a negligible portion of the system and DRAM timing budget.
- Clock jitter only had to be controlled over a relatively short period, such as the time for the DRAM DLLs to lock, typically a couple hundred cycles.
- Random noise sources would be similarly negligible compared to the difference between the specified driver voltage swings and required receiver voltage swings.
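Why the random-jitter assumption breaks down can be shown with the dual-Dirac total-jitter model commonly used in high-speed serial analysis. This is a sketch under that assumed model, not the budgeting method JESD79-4 itself defines: total jitter at a target bit error ratio is TJ(BER) = DJ + 2·Q(BER)·RJrms, where Q(BER) is the number of Gaussian sigmas corresponding to the target tail probability.

```python
import math

def q_from_ber(ber):
    """Q scale factor: RJ sigmas per side at the target BER.
    Inverts the Gaussian tail probability 0.5*erfc(x/sqrt(2)) by bisection."""
    lo, hi = 0.0, 20.0
    for _ in range(100):            # bisection is plenty accurate here
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid                # tail still too fat -> move right
        else:
            hi = mid
    return (lo + hi) / 2

def total_jitter_ps(dj_ps, rj_rms_ps, ber=1e-12):
    """Dual-Dirac total jitter: deterministic part plus scaled random part."""
    return dj_ps + 2 * q_from_ber(ber) * rj_rms_ps

# With 20 ps of deterministic jitter and only 2 ps RMS of random jitter
# (illustrative values), the random term alone contributes roughly 28 ps
# at BER 1e-12 -- far from negligible against a 60 ps data valid window.
tj = total_jitter_ps(20.0, 2.0)
```

Because Q(10⁻¹²) is about 7, even a small RMS random jitter is multiplied by roughly 14 when peak-to-peak behavior at a realistic error target is what matters.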
It has always been a given in digital interface design that if the required setup and hold times are met, data transfers will complete successfully. High-speed digital designers have known for years, however, that this is never literally true. Consider an example based on the JEDEC GDDR5 specification (JESD212). GDDR5 defines a minimum pulse width (tDIPW) and a data valid window width (tDIVW) for the signal transfer (see Figure 1). Because GDDR5 allows the controller to discover the optimal position for each data bus (DQ) signal during bus training, the individual values of Ts and Th do not need to be defined, only their total. The DQ signal for the first data transfer (green) exactly meets the tDIVW requirement, while the second transfer (red) is too narrow, violating the spec by only fractions of a femtosecond. A strict interpretation of these timings would predict that the first transfer succeeds 100% of the time, that is, with zero errors, while the second could not be assumed to succeed at all. Can this really be what actually happens? Experienced designers know that there is never a point where the error probability is truly zero. They also know that the transition from “nearly zero” errors to a higher error rate is not instantaneous but occurs over a range of timing values.
Figure 1: The first signal (green) meets GDDR5 minimum pulse width (tDIPW) and data valid (tDIVW) specifications while the second signal (red) exceeds the tDIVW limit. In theory, this should result in successful data transfer for the first signal and 100% error for the second, but the reality is that neither result holds true.