Low Latency DRAMs, of one form or another, have been on the scene for nearly ten years. Potential users have remained skittish about adopting them because of a sketchy history: incompatible products from competing vendor camps, vendors entering the market and then disappearing, and limited sources of supply.
The arrival of several new vendors in the market is helping to assuage some of the uncertainty associated with Low Latency DRAM. What remains before the technology can flourish is general agreement on where LLDRAMs fit in the larger memory landscape.
From 1975 to 1995 there were three primary volatile discrete memory technologies serving mainly the computer market: DRAM, Slow SRAM, and Fast SRAM. From a relative price-per-bit (PPB) perspective, the relationship between these technologies remained constant. At any point in time a user could expect Slow SRAM to cost about 10X the price per bit of a commodity DRAM. The same number of Fast SRAM bits would cost approximately 10X more again, and on-die memory bits would cost another 10X beyond that. Today we have a new crop of volatile memory products that provide blindingly high-bandwidth interfaces, but viewed from a latency perspective they still occupy the same relative latency and cost-per-bit niches they always have.
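The 10X-per-tier ladder described above can be sketched in a few lines. This is purely illustrative of the relative relationship, with commodity DRAM normalized to 1; the tier names and multipliers come from the paragraph above, not from any vendor pricing data.

```python
# Illustrative relative price-per-bit (PPB) ladder: each tier costs
# roughly 10X the tier below it, with commodity DRAM normalized to 1.
TIERS = ["Commodity DRAM", "Slow SRAM", "Fast SRAM", "On-die memory"]

def relative_ppb(tier: str) -> int:
    """Return the relative price-per-bit multiplier for a memory tier."""
    return 10 ** TIERS.index(tier)

for tier in TIERS:
    print(f"{tier:>15}: {relative_ppb(tier):>5}x DRAM price per bit")
```

The point of the model is that the *ratios* held over two decades even as absolute prices fell, which is why the niches themselves have been so stable.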
The answer to “When is a DRAM not a DRAM?” from a historical perspective is “When it sits in a slow SRAM price/performance niche.” But can those costs be justified?
To make a RAM faster in a given process technology, memory chip designers break the memory array into smaller and smaller blocks to shorten signal lines, reduce parasitic loading, and allow more rapid amplification of the tiny signal detected on the memory cells. Smaller blocks mean more decoders, more amplifiers, more drivers, and more routing, and therefore a larger die. Low Latency DRAMs beat commodity DRAM latency by increasing die size and product cost. That is why Low Latency DRAMs now occupy the price/performance niche once occupied by Slow SRAMs, despite the fact that they are built with DRAM-type bit cells.
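The trade-off above can be captured in a toy model. All of the numbers here are hypothetical (normalized cell-array area, an assumed per-block periphery overhead, and an assumed square-root latency scaling with block count); the model only illustrates the direction of the trade, not real silicon data.

```python
# Toy model of the array-subdivision trade-off: more, smaller blocks
# shorten signal lines and cut latency, but every extra block adds
# peripheral overhead (decoders, sense amps, drivers, routing),
# which grows the die. All parameters are assumptions for illustration.

CELL_AREA = 1.0              # normalized area of the raw bit-cell array
PERIPHERY_PER_BLOCK = 0.02   # assumed area overhead added per block

def die_area(blocks: int) -> float:
    """Total die area: fixed cell array plus per-block periphery."""
    return CELL_AREA + blocks * PERIPHERY_PER_BLOCK

def access_latency(blocks: int, base_ns: float = 40.0) -> float:
    """Assumed latency: falls as blocks multiply and lines shorten."""
    return base_ns / (blocks ** 0.5)

for blocks in (4, 16, 64):
    print(f"{blocks:>3} blocks: area {die_area(blocks):.2f}x, "
          f"latency {access_latency(blocks):.1f} ns")
```

Running the loop shows area climbing as latency falls, which is exactly why a Low Latency DRAM lands in the Slow SRAM cost tier even though its bit cell is a DRAM cell.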
Given recent history, networking designers have every right to wonder whether Low Latency DRAM technology has staying power in the networking market. We have a two-decade-long existence proof that a RAM product with this price/performance characteristic is viable in the computer market, and we can expect the same thing to happen in the networking memory market. As long as Low Latency DRAM products stay within their mid-latency niche, the cost to produce them can remain within the traditional Slow SRAM cost-per-bit range, and the market for them will solidify.
So when is a DRAM not a DRAM? When it performs like a Slow SRAM? When it sells for the same relative price per bit as Slow SRAMs? Yes, both. But perhaps the best news for networking system designers is that the Low Latency DRAM banner has been taken up by memory suppliers who understand how to serve the high-performance, high-mix, long-life-cycle world of networking memory. The new LLDRAM vendors on the scene are SRAM vendors who specialize in networking memory. The best answer may be: when the vendors support it like a networking SRAM.
David Chapman is VP Marketing and Applications Engineering at GSI Technology, Inc.