While DRAM average selling prices (ASPs) have leveled off recently, they are still the highest the market has seen since 2008. According to iSuppli (El Segundo, CA), the $3.03 ASP across all DRAM parts was the highest since Q3 2008. A report from IT market analysis firm Research and Markets projected that worldwide DRAM revenue will near $40 billion in 2010, up 81 percent from the $22 billion earned in 2009.
All of this is very good news for DRAM suppliers, who remain highly focused on allocating their capacity and capital expenditures to the most strategic and profitable products. DRAM suppliers maximize their revenue through continual die shrinks to reduce costs and maximize output per wafer. They typically concentrate on the specific DRAMs that achieve the highest volumes and profit margins. For OEMs whose memory requirements and product lifecycles don't align with mainstream desktops, notebooks, and servers, this may not be good news. There are many examples of networking, telecom, and industrial products that need to be refreshed or upgraded with more memory, or that require memory with a unique configuration, but the memory is simply not available.
The transition to newer DRAM technologies and densities is accelerating. Specific DRAM types such as 512Mb SDRAM, 1Gb DDR1, and 2Gb DDR2 are EOL'd (end-of-lifed) long before the end applications stop shipping. Initially, two or three DRAM suppliers manufacture these types of DRAMs, but they drop off and shift their resources to the next newest technology. The initial outlook when designing these DRAMs in is promising, but over a shorter time than expected they get phased out, leaving OEMs either with only one supplier, which can be very costly, or forced into a printed-circuit board (PCB) redesign. While these products may still be profitable for DRAM manufacturers, they don't justify the allocation of newer processing equipment.
In the networking and telecom segments, deep packet inspection (DPI) capabilities are driving the need for memory with wider configurations such as x16 and x32. For mainstream memory used in desktops and servers this is not a requirement, so networking and telecom OEMs can be forced into adopting mainstream memory (and sacrificing performance) or using more costly niche memory technologies.
Stacking: The Next Best Solution
One way to address these challenges is to use the long-established industry solution of DRAM stacking. DRAM stacking leverages readily available, longer-life DRAMs into higher-density and/or specially configured stacks. DRAM stacking consists of using two off-the-shelf, 100%-tested BGA or TSOP DRAMs. For BGA DRAMs, a PCB cavity is used to stack the DRAMs together and route the signals from one DRAM to another. A common example is shown in the diagram below.
I have been involved in stacked DRAM products on the supplier side since the early '80s. While the concept is quite viable, there are some points to consider.
1. Addressing. DRAMs use each address pin for two address inputs (the row and column addresses are multiplexed). When you double the density, you need the equivalent of half a pin more addressing: one additional multiplexed address bit or a chip select for the second die. Either the memory module or the system must provide for this.
2. Fan-in/fan-out. By stacking, you at least double the drive requirements and the output loading of the module. If you add a buffer chip to take care of this, you add delay. For a buffered module, it's usually a clock cycle's worth.
3. Cooling. The added height of the DRAM stack may interfere with the air flow over the module.
4. Cost. Cost is more than 2X the original because of the stacking costs, additional testing, any support chips, etc.
5. Source control. Module manufacturers can buy from multiple DRAM vendors, making control of the vendor source and die revisions more difficult.
6. Reliability. There are more interconnections and 2X the number of die involved.
All that being said, stacking is sometimes the best or only way to go.
I am intimately familiar with the stacked TSOP DDR DRAM. I can say we've seen a high rate of manufacturing defects related to these parts, particularly loose solder joints, shorts on the pins, and issues with the interposer PCB between the top and bottom chips in the stack. These stacked TSOPs are very hard to rework or repair when closely spaced together.
I see this less as a solution to the stated problems than as something that aggravates them. Stacking is indeed the best way to clear out old but well-established products to make way for newer ones. But Nx stacking means Nx cost and Nx interface complexity, and the yield is harder to meet (as on a totally new product). If you can put more onto the same chip, that is still more cost effective. Sometimes that is not possible; then you can stack.