New ultra-slim notebook PCs such as the Ultrabook are changing the way memory is used in portable clients. Because these machines cannot accept small-outline dual in-line memory modules (SODIMMs), manufacturers have found themselves forced to solder memory onto the motherboard directly. Severe limitations on PCB area have driven the use of high-density interconnect (HDI) technology for first-generation machines, but board real estate and cost remain issues.
A multi-die DRAM packaging technology called DIMM-in-a-package solves these problems by providing the memory functionality of an SODIMM in a single package. Using the new approach, an existing Ultrabook notebook design was converted to a simpler non-HDI implementation, saving over $15 in PCB costs per system while reducing PCB prototyping time from three weeks to five days. At typical PC product volumes, this represents several million dollars per year in savings.
Initial implementations use face-down wirebonded DDR3/DDR4 memory for the lowest-cost segment, but the same ballout supports multi-die LPDDR3 and GDDR5 configurations. This allows OEMs/ODMs to build a range of differentiated products on a common PCB design, each populated with a different memory technology: DDR3, DDR4, GDDR5, or LPDDR3. The change simplifies manufacturing logistics for the OEM, but the common ballout brings value to the semiconductor manufacturer, too, allowing test infrastructure to be shared among these device types. This consolidation represents millions of dollars in potential savings in manufacturing tooling investment on the memory test floor.
The standard single-die package
Besides completely changing the logistics of DRAM inventory management by eliminating pluggable memory modules that can be installed just prior to shipping, soldered-down DRAM presents new challenges in motherboard design. The greatest challenge is minimizing the area occupied by memory components while still permitting high-speed operation (>1333 MT/s) on a low-cost PCB design. At the heart of the issue is the difficulty of routing the industry-standard 78-ball BGA single-die package, used by nearly all of these motherboard designs, on such a PCB layout.
Despite the fact that manufacturers annually ship billions of single-die DRAM packages (SDPs) on double-sided DIMMs, motherboard requirements for area minimization, reworkability, and limits on routed trace lengths make these memory components difficult to use on low-cost standard-process PCBs that feature through-board vias. For example, a common manufacturing-driven design rule requires approximately 3 mm of free space surrounding any BGA on a motherboard to permit rework. An 8-mm-wide memory package therefore requires an 11-mm pitch on a motherboard, versus an 8.5-mm pitch on a DIMM. When 16 such packages are used to construct the required dual 64-bit-channel memory subsystem, substantially more PCB area is consumed than the size of the package alone would suggest.
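The pitch arithmetic above can be sketched as a quick back-of-the-envelope calculation. The package width, rework keepout, DIMM pitch, and package count come from the figures in the text; treating the 16 packages as a single row is an illustrative simplification, not an actual placement:

```python
# Rough comparison of the linear board span consumed by 16 single-die
# DRAM packages on a motherboard (with rework keepout) versus on a DIMM.

PACKAGE_WIDTH_MM = 8.0    # 8-mm-wide memory package
REWORK_KEEPOUT_MM = 3.0   # free space required around each BGA for rework
DIMM_PITCH_MM = 8.5       # package pitch achievable on a DIMM
NUM_PACKAGES = 16         # dual 64-bit channels of x8 devices

motherboard_pitch = PACKAGE_WIDTH_MM + REWORK_KEEPOUT_MM   # 11 mm
motherboard_span = NUM_PACKAGES * motherboard_pitch        # 176 mm
dimm_span = NUM_PACKAGES * DIMM_PITCH_MM                   # 136 mm

overhead = (motherboard_span - dimm_span) / dimm_span
print(f"motherboard span: {motherboard_span:.0f} mm")
print(f"DIMM-style span:  {dimm_span:.0f} mm")
print(f"overhead: {overhead:.0%}")   # roughly 29% more linear space
```

The keepout alone inflates the effective pitch by nearly 30 percent before any breakout routing is even considered.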
Another difficulty is interconnecting devices placed in “clamshell” configuration on opposite sides of the PCB. Because the ball assignment of the industry-standard BGA DRAM package has no symmetry, the breakout region fills with the vias and traces needed to cross-tie the shared signals. Maintaining both a minimum and a maximum length for every trace under dense package placement requires the blind-via (layer-to-layer) technology enabled by HDI processing. A key layout benefit of blind vias is that a signal can be routed across a via without making contact simply by using a different routing layer; through-board vias, in contrast, block the routing channels wherever they are placed.
The HDI manufacturing process combines a conventional multi-layer PCB core with a number of build-up layers on each side of the core. To make an HDI PCB, a layer of dielectric is laminated to one side of a normal-process multi-layer PCB containing through-board vias. The dielectric is laser-drilled and plated with copper, which is then photo-etched. The process is repeated as many times as necessary on each side of the board to reach the final layer count. A 3-6-3 HDI PCB, for example, has a six-layer core with three such build-up layers added to each side (see figure 1).
Figure 1: Cross section of HDI versus standard process PCBs.
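The A-B-A stack-up naming used above can be captured in a trivial sketch; the helper name and the second example stack-up are assumptions for illustration, not an industry API:

```python
def hdi_layer_count(buildup_per_side: int, core_layers: int) -> int:
    """Total copper layers of an A-B-A HDI stack-up: a conventional
    multi-layer core plus symmetric build-up layers on each side."""
    return core_layers + 2 * buildup_per_side

# The 3-6-3 stack-up from the text: six-layer core plus three
# build-up layers per side, for twelve copper layers in total.
print(hdi_layer_count(3, 6))   # 12

# A lighter (hypothetical) 1-4-1 stack-up would total six layers.
print(hdi_layer_count(1, 4))   # 6
```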
It’s a complex, multistep, and expensive process. Meanwhile, cost reduction of the Ultrabook platform is a high priority for manufacturers. Given volumes on the order of millions of units per year and thin margins, aggressive cost reduction can make the difference between a profitable product and a money-loser. Eliminating the need for HDI PCBs reduces the bill of materials by more than $10 per system.
Although dense routing of the memory subsystem is the primary motivation for HDI technology, other system motherboard components also require HDI for routing reasons. One such component, the peripheral control hub (PCH), is offered in two package versions: a 25 x 25 mm footprint and a 22 x 22 mm small-form-factor (SFF) version. The SFF PCH is typically used in the Ultrabook platform to minimize the PCB area occupied by this large component. Because of its aggressive ball pitch, a 0.65 x 0.8 mm grid with interdigitated balls, the SFF footprint requires the blind vias enabled by HDI processing to break out its signals. The smaller footprint is needed to meet the overall size requirements for the motherboard. To eliminate HDI technology, the area required for the memory subsystem must therefore be reduced enough to allow the use of the larger PCH.