Design engineers have a tough life. These days they are forced to work with diminished resources to turn out smaller, faster, better devices to meet ever-shrinking time-to-market deadlines. Moreover, the devices they are designing have become increasingly complex, resulting in a whole new array of challenges for both design and test. These challenges tax not only the traditional tools used for design and test, but the capabilities and patience of the engineer as well.
To better understand this dilemma, consider the example of a recent system development project I worked on. The system consisted of a memory controller integrated with a GPU and a south bridge (see figure 1). The main issue my team and I faced came from hot spots on the PCB ground plane that contributed to the collapse of the eye. The hot spots stemmed from congestion of the high-speed data-signal surface currents from a memory channel running at 1.33 GT/s.
Figure 1: The system discussed in this article consisted of a memory controller integrated with GPU and south bridge.
Further complicating matters, we were trying to run the DDR3 memory system with two SODIMMs per channel, fully populated, at 1.33 GT/s, while dealing with the cost, power, and lightspeed product-to-market cycle constraints resulting from the economic downturn. We wanted to reduce cost everywhere on the memory channel, starting with the number of layers on the controller package and PCBs, as well as the bill of materials (BOM), but doing so on such a tight schedule was a risky undertaking.
From my 17 years of experience designing memory systems, I knew that when cutting corners to meet tight product-to-market cycles, designers often neglect to model the channel's critical phenomena. One such phenomenon, return path discontinuity (RPD), causes eye collapse and loss of memory performance. To meet the product's cost target, my team's memory-controller package layout designer wanted to reduce the number of layers, as well as the number of stitching ground plated through holes (PTHs), which make the power planes look like Swiss cheese.
Of course, this brings me back to the hot spots I mentioned earlier. These hot spots are caused by the lack of stitching ground PTHs where data signals move from the die bumps to the balls of the package while changing reference planes (see figure 2). The result is excessive crosstalk that would have forced us to down-bin the system to 1.067 GT/s, or even 0.8 GT/s, instead of running at 1.33 GT/s. For obvious reasons, neither option was acceptable to my team.
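To put those down-bin options in timing terms, here is a quick illustrative calculation. The rate names and numbers are simply the standard DDR3 speed grades quoted above (in gigatransfers per second); each step down widens the unit interval per bit and lowers the Nyquist (fundamental) frequency the channel must carry cleanly:

```python
# Standard DDR3 speed grades (GT/s) and the timing each implies.
rates_gts = {"DDR3-1333": 1.333, "DDR3-1066": 1.067, "DDR3-800": 0.800}

for name, gts in rates_gts.items():
    ui_ps = 1e12 / (gts * 1e9)        # unit interval per bit, in picoseconds
    f_nyq_mhz = gts * 1e9 / 2 / 1e6   # Nyquist (fundamental) frequency, in MHz
    print(f"{name}: UI = {ui_ps:.0f} ps, Nyquist = {f_nyq_mhz:.0f} MHz")
```

Roughly 190 ps of extra unit interval separates the top grade from the bottom one, which is the margin a designer buys back by giving up bandwidth instead of fixing the return path.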
Figure 2: The lack of stitching ground PTHs plays a contributing role in the occurrence of hot spots on a PCB.
The answer to resolving the issue lay in selecting an appropriate method for modeling the package and PCB RPD. The method we opted to use was the frequency-domain formulation of the method of moments, coupled with full-wave computational electromagnetic simulation. The method of moments is accurate from DC up to hundreds of gigahertz, depending on the meshing. To verify this approach, I correlated the results from a method of moments simulator with vector network analyzer (VNA) measurements of a typical PCB (see figure 3).
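To give a flavor of what a method of moments formulation looks like under the hood, here is a minimal electrostatic sketch; it is a textbook toy problem, not the frequency-domain full-wave formulation a commercial simulator implements. It solves for the charge distribution on a thin wire held at 1 V using pulse basis functions and point matching, then sums the charge to get the wire's capacitance. The wire dimensions and the thin-wire self-term approximation are illustrative assumptions:

```python
import numpy as np

eps0 = 8.854e-12   # permittivity of free space, F/m
L = 1.0            # wire length, m (assumed)
a = 1e-3           # wire radius, m (assumed)
N = 40             # number of pulse-basis segments
dx = L / N
x = (np.arange(N) + 0.5) * dx   # match points at segment centers

# Potential ("impedance") matrix: entry (m, n) is the potential at match
# point m due to a unit line-charge density on segment n.
Z = np.empty((N, N))
for m in range(N):
    for n in range(N):
        if m == n:
            # standard thin-wire self-term approximation
            Z[m, n] = np.log(dx / a) / (2 * np.pi * eps0)
        else:
            Z[m, n] = dx / (4 * np.pi * eps0 * abs(x[m] - x[n]))

V = np.ones(N)              # wire held at 1 V everywhere
q = np.linalg.solve(Z, V)   # line-charge density per segment, C/m

C = np.sum(q) * dx / 1.0    # total charge / 1 V = capacitance, F
print(f"capacitance ~ {C * 1e12:.1f} pF")
```

The same recipe — expand the unknown in basis functions, enforce the boundary condition at match points, solve the resulting dense matrix — carries over to the frequency-domain electromagnetic case, where the unknowns become surface currents and the kernel becomes a Green's function at each analysis frequency; the meshing density then sets the upper usable frequency.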
Figure 3: VNA measurement of a typical PCB correlated with Agilent’s method of moments electromagnetic simulator, Momentum.