What comes after today's fastest interfaces? Jim Handy talks about Wide-IO, hybrid memory cube, and high-bandwidth memory.
As I write this we are on the brink of a conversion from DDR3 to DDR4 DRAM in PCs and servers. Cellphones and tablets use separate interfaces, but in all cases the industry anticipates an upcoming juncture when it must depart from these interfaces and migrate to something faster with lower power requirements. What comes after today’s fastest interfaces? Most currently proposed solutions are based upon changes to DRAM packaging technology.
A number of such alternatives are under development. Wide-IO promises to provide a wider, faster bus for communication between a cellphone baseband chip and a single DRAM, while reducing interface power consumption to a fraction of today’s level. This standard uses a DRAM stacked atop the processor chip, communicating over through-silicon vias (TSVs) either directly or via an interposer. These signals don’t have to drive an external package pin or a trace on a circuit board, and they don’t require electrostatic discharge (ESD) protection, all of which are highly capacitive loads; eliminating them significantly reduces signaling power requirements. In a cellphone, that translates to longer battery life.
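The power argument comes straight from the CMOS dynamic-power relation P = C·V²·f: cut the capacitance each driver must charge and discharge, and power falls in proportion. Here is a back-of-the-envelope sketch; the capacitance, voltage, rate, and line-count figures are illustrative assumptions, not measured values for any real interface.

```python
# Dynamic signaling power: P = C * V^2 * f per line.
# All numeric values below are illustrative assumptions for the sketch.

def signaling_power_watts(capacitance_f, voltage_v, toggle_hz, lines):
    """Dynamic power to drive `lines` signal lines of the given capacitance."""
    return capacitance_f * voltage_v ** 2 * toggle_hz * lines

# Conventional off-chip link: pad + ESD + package pin + board trace,
# assumed ~5 pF per line, 32 lines toggling at 800 MHz.
off_chip = signaling_power_watts(5e-12, 1.2, 800e6, lines=32)

# TSV link: no pin, board trace, or ESD structure to drive,
# assumed ~0.05 pF per line, but many more lines (512).
tsv = signaling_power_watts(50e-15, 1.2, 800e6, lines=512)

print(f"off-chip: {off_chip * 1e3:.1f} mW, TSV: {tsv * 1e3:.1f} mW")
```

With these assumed numbers the TSV interface moves sixteen times as many signals for roughly one-sixth the power, which is the essence of the Wide-IO pitch.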
The Hybrid Memory Cube (HMC) and a competing technology called High-Bandwidth Memory (HBM) are aimed at computing and networking applications. These approaches stack multiple DRAM chips atop a logic chip. All bus interface functions are the responsibility of the logic chip, and the DRAMs communicate with it through thousands of TSVs. As with Wide-IO, that means very fast signaling with low power dissipation.
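The bandwidth advantage of these stacked approaches is simple arithmetic: thousands of TSVs allow a very wide bus, so each signal can run at a modest rate and still outpace a narrow, fast off-chip channel. A quick sketch, using illustrative widths and per-pin rates (not the figures of any specific standard):

```python
# Peak bandwidth of a parallel bus: width (bits) * per-pin rate (Gb/s) / 8.
# Bus widths and per-pin rates below are illustrative assumptions.

def bandwidth_gb_per_s(bus_width_bits, per_pin_gbps):
    """Peak bandwidth in gigabytes per second for a parallel bus."""
    return bus_width_bits * per_pin_gbps / 8

narrow_fast = bandwidth_gb_per_s(64, 3.2)    # DIMM-style channel: narrow but fast
wide_slow = bandwidth_gb_per_s(1024, 1.0)    # stacked-DRAM interface: wide but slow

print(f"narrow/fast: {narrow_fast:.1f} GB/s, wide/slow: {wide_slow:.1f} GB/s")
```

Even at less than a third the per-pin rate, the wide interface delivers several times the bandwidth, and, as the power sketch above suggests, it does so at far lower energy per bit.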
Of these three technologies, the Hybrid Memory Cube is furthest along and has been demonstrated in working systems at tradeshows, but the others are not far behind.
All three technologies are aimed at point-to-point DRAM-processor interfaces without the standard upgradeability that today’s DIMMs provide. Since the industry is likely to stop upgrading systems by adding DRAM and move toward upgrading on-board NAND flash (or its successor) instead, a point-to-point interface is a very acceptable change to computer design.
The HMC or HBM approach will see initial use in supercomputers in the near future, but it is unlikely to reach widespread use until DDR4 reaches its end of life, and that could be five years from now. I would expect to see serious conversion to one or both of these technologies in the coming decade.
It appears that it may be a couple of years before Wide-IO is adopted in portable electronics, but once that occurs, the industry would be likely to undergo a very sudden conversion to that standard. I am not holding my breath for its acceptance, though. Back in the early 1990s there was a big wave behind the multichip module (MCM). That technology never took off because a single failed low-priced SRAM forced manufacturers to scrap the processor chip it was bonded to, a part priced an order of magnitude higher. With Wide-IO we have a similar problem: a $1 DRAM will be bonded to a $35 baseband chip, so one bad DRAM can ruin the far more expensive processor. This may prevent the technology from being adopted at all, and we may instead see some ingenious developments that allow less exotic approaches to meet the speed and power requirements of tomorrow’s cellphones.
This series of three posts (When Will NAND Flash Be Replaced by an Alternative Technology?, SanDisk Optimizes SSDs for Enterprise Workloads, and this one) examines three very important changes that will develop over the next few years: alternatives to NAND flash, flash as memory, and faster DRAM interfaces. There are others that are worth exploring that I will save for future posts.
All of these will take some time to really fall into place, but I am sure that, once we have converted to these technologies, we will wonder how we lived without them.
This is a very interesting time to be in the memory business.