Now, using the DDR4 specification, you can allocate the jitter and noise budget for your memory controller, bus, and DRAM in a way that was not possible before. You can define design goals that optimize cost, time to market, and MTBF without having to overdesign or screen parts to an unnecessarily tight degree. In addition, because the random jitter and noise components have a Gaussian distribution, they are combined using a root-sum-of-squares (RSS) method before the total jitter and noise are computed, rather than computing the total jitter and noise of the controller, bus, and DRAM separately and then adding them.
This approach can make a big difference in how much margin you have to build into your design. For example, assume the random jitter of your controller is 4 ps RMS and the memory bus adds another 3 ps RMS (due to crosstalk or other effects). At a 10⁻¹⁶ BER, the Q-factor (the ratio of total jitter/noise to RMS jitter/noise) is 8.2. Budgeted separately, the total jitter of the controller would be 8.2 × 4 ps = 32.8 ps and the total jitter of the interconnect would be 8.2 × 3 ps = 24.6 ps, for a grand total of 57.4 ps of the jitter budget consumed by the controller and interconnect. The DDR4 spec lets you assume these random components have a Gaussian distribution, however, so the total jitter is computed by first taking the RSS of the RMS jitters and only then converting to total jitter.
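The Q-factor itself follows from the Gaussian tail probability. Here is a minimal sketch, assuming the single-sided convention BER = ½·erfc(Q/√2), which reproduces the spec's quoted values (Q ≈ 8.2 at 10⁻¹⁶, Q ≈ 9.2 at 10⁻²⁰); the function name is illustrative, not from any standard library:

```python
import math

def q_factor(ber: float) -> float:
    """Solve BER = 0.5 * erfc(Q / sqrt(2)) for Q by bisection.

    Assumes the single-sided Gaussian tail convention, which matches
    the Q values quoted in the text (~8.2 at 1e-16, ~9.2 at 1e-20).
    """
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid  # tail probability still too large: Q must grow
        else:
            hi = mid
    return (lo + hi) / 2

print(round(q_factor(1e-16), 1))  # ~8.2
print(round(q_factor(1e-20), 1))
```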
The RMS jitters of the controller and interconnect combine to produce 5 ps of RMS jitter (√((4 ps)² + (3 ps)²) = 5 ps). Multiplying this by the Q-factor gives 8.2 × 5 ps = 41.0 ps of total jitter. That is significantly less than the 57.4 ps of total jitter you had to design to before. The difference of 16.4 ps may not seem like much, but for a DDR4-2400 design it amounts to 20% of the entire data valid window! Your design no longer has to make up this 20% of margin that the traditional specification method would have required. This "phantom" margin also matters in production screening, where the jitter (and noise) contributions of the device under test and the test fixture must be accounted for. Screening for a realistic amount of margin rather than an unnecessarily large one can significantly improve yields and therefore reduce cost.
Adjusting the DDR4 Mask for a different BER
The DDR4 spec defines the data-valid window for a BER of 10⁻¹⁶. But what if you want to improve reliability to a smaller BER, such as 10⁻¹⁸ (a failure every two weeks) or 10⁻²⁰ (a failure every four years)? How much would you have to open the data eye at the DRAM to achieve this? Figure 3 shows a DRAM mask at the "standard" BER of 10⁻¹⁶. (To help with this explanation, the size of the mask itself is reduced.)
Figure 3: The DDR4 specification defines a "standard" eye mask for a BER of 10⁻¹⁶. (Mask graphic is reduced in size for explanatory purposes.)
To compute the values of TdiVW_total and VdiVW_total when operating the DRAM at a lower error rate, you would first compute the total random jitter/noise component of the DRAM spec. This is simply the difference between the total (TdiVW_total and VdiVW_total) and deterministic (TdiVW_dJ and VdiVW_dV) components given in the AC parameters table shown in part 1. Dividing this by the Q-factor for a 10⁻¹⁶ BER (8.2) converts the total random jitter/noise to an RMS value. You would then multiply this RMS value by the Q-factor for the BER at which you want to operate. For example, the Q-factor for a 10⁻²⁰ BER is 9.2. Multiplying the RMS jitter/noise values by 9.2 gives the size of the eye the DRAM requires to maintain the new BER. The new mask is shown in figure 4. The deterministic component stays the same as it was for the BER of 10⁻¹⁶. The random component, however, is larger, meaning that a more open eye is required to run at a lower error rate. This is exactly what would be expected.
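The recipe just described can be sketched as a small helper. The function name and the 0.20 UI / 0.12 UI inputs are hypothetical placeholders for illustration, not values taken from the spec:

```python
def rescale_total(total: float, deterministic: float,
                  q_old: float = 8.2, q_new: float = 9.2) -> float:
    """Rescale a total jitter/noise parameter to a new BER.

    The deterministic part is BER-independent; only the random part
    (total - deterministic) scales with the Q-factor.
    """
    rms_random = (total - deterministic) / q_old  # back out the RMS value
    return deterministic + q_new * rms_random

# Hypothetical example: a 0.20 UI total window with a 0.12 UI
# deterministic component, moved from a 1e-16 to a 1e-20 BER.
print(rescale_total(0.20, 0.12))  # larger than 0.20: the eye must open
```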
Figure 4: Reduced error rate mask showing increased random component.