This month, the Hybrid Memory Cube Consortium released the long-anticipated final version of its initial HMC interface specification. The HMC architecture pairs a logic layer with a stack of up to eight memory die, connected by through-silicon vias (TSVs), to reach signaling speeds of 15 Gb/s. Version 2 of the standard, currently on track for release in early 2014, aims for a blistering 28 Gb/s. What does this mean for design engineers, the competitive landscape and personal computing? We asked Mike Howard, senior principal analyst for DRAM and compute platforms at IHS iSuppli.
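For a rough sense of what those lane rates imply at the system level, here is a back-of-envelope sketch in Python. The 16-lane full-width link and four-links-per-cube figures are assumptions based on commonly described HMC configurations, not numbers taken from the paragraph above; only the lane rates come from the spec milestones it mentions.

```python
# Back-of-envelope HMC bandwidth math. The link width (16 lanes per
# direction) and link count (4 per cube) are assumed, illustrative
# values; only the 15 and 28 Gb/s lane rates appear in the article.

def link_bandwidth_gbytes(lane_rate_gbps, lanes_per_direction=16):
    """Aggregate bandwidth of one bidirectional link in GB/s."""
    bits_per_second = lane_rate_gbps * 1e9 * lanes_per_direction * 2
    return bits_per_second / 8 / 1e9  # bits -> bytes, then GB

for rate in (15, 28):  # v1 lane rate and the v2 target
    per_link = link_bandwidth_gbytes(rate)
    print(f"{rate:>2} Gb/s per lane: {per_link:5.1f} GB/s per link, "
          f"{4 * per_link:5.1f} GB/s across four links")
```

Under those assumptions, the 15 Gb/s rate works out to roughly 240 GB/s of aggregate cube bandwidth, which is why the interface draws so much attention from bandwidth-starved applications.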
Kristin Lewotsky, editor, Memory Designline: The standard is finally out, but development has been underway for some time. How do you expect the market to develop?

Mike Howard: I think we'll see the first trickling of products this year, with product shipping next year. It's not going to take the market by storm. There's DDR4 coming along, and we'll see samples of that this year and product shipping next year, so it's after that that we'll actually see sweeping implementation.
Who are the early adopters?

[Enterprise or data center] switches need to buffer traffic while they read the header to figure out where to send packets. As we go to 40 Gigabit Ethernet and 100 Gigabit Ethernet and beyond, it's not a 2.5X memory bump; it's not a true exponential, but it is more than a simple multiplicative increase in memory need. The ability to push a massive amount of data into DRAM for a fraction of a second to figure out where to send it really is a wall right now. It's not the traditional memory wall, but it is a process-limiting piece of hardware, and that is where HMC is going to be huge. Then five or six years down the road, HMC technology is going to be in front of a lot more DRAM: not just high-performance compute or network switches but mainstream platforms, whether they be in your pocket or on your desktop.
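To make the buffering problem concrete, here is a minimal sizing sketch in Python. The 250 µs hold time and the write-once/read-once memory model are illustrative assumptions, not figures from the interview.

```python
# Rough switch packet-buffer sizing, assuming the classic rule of
# thumb: buffer ~= line_rate x hold_time. The 250 us hold time is an
# illustrative assumption; the 2x factor reflects that every
# buffered byte must be written to and later read back from DRAM.

HOLD_TIME_S = 250e-6  # assumed time a packet sits while being routed

def buffer_megabytes(line_rate_gbps, hold_time_s=HOLD_TIME_S):
    return line_rate_gbps * 1e9 * hold_time_s / 8 / 1e6

for rate in (10, 40, 100):  # GbE line rates
    print(f"{rate:>3} GbE: {buffer_megabytes(rate):6.2f} MB of buffer, "
          f">= {2 * rate} Gb/s of raw DRAM bandwidth per port")
```

One common reading of the "more than a simple multiplicative" point shows up in the bandwidth column: each byte crosses the memory interface twice, and the small per-packet access granularity makes random-access efficiency, not just capacity, the limiting factor.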
After DDR reaches its limit, HMC takes over?

Exactly. I'm not sure how much willingness or appetite there is for DDR5. Particularly if these first-generation HMC products do well, it could really jeopardize further DDR development, at least in the commodity DDR4-to-DDR5 space. It's less clear whether it would have the same impact on the low-power front.
In 2011, Intel Labs collaborated with Micron to build an HMC prototype, but they've been quiet on the subject since, and they aren't part of the consortium. Any idea what's happening there?

I think they're keeping their own counsel. The actual interface with an Intel chip is closely held; Intel is not going to throw it open to just anyone, and I think that might be part of the holdup. But HMC and ARM is a combination of acronyms that I hear a lot, and that paints Intel into a horrible corner if HMC does take off and Intel is somehow not part of the party. They're already fighting enough battles right now. I don't think they want to be handicapped further in servers, so while we haven't heard a lot about Intel [and HMC], I don't think there's any indication that they're not going to widely support it.
How do you think HMC market dynamics will play out?

For the DRAM guys, this is going to end up being just another commodity, and it doesn't really change the competitive [situation]. The way you continue to win in DRAM is to execute your process transitions, shrink your die and improve your performance. HMC will provide a nice little boost, but the idea that you're going to be able to charge an inordinate amount for HMC is out the window. Samsung, Hynix and Micron are all on board, and if you try to jack the price up beyond where the market is, your competition is going to kill your business.
But 3-D fabrication, with TSVs and stacking, is more complex and expensive to execute.

You're absolutely right. That's why the early adopters, the network guys and also the early high-performance-computing server guys, are going to be paying for a lot of tooling over the next two or three years, whether they realize it or not. Three or four years from now, TSV is going to be a very well-understood technology for the DRAM guys, and your ability to stack will be a competitive advantage or disadvantage.
Fewer players, less inventory: has the DRAM market as a whole become more rational in the past few years?

We're down to three players right now, and it's looking like a pretty balanced industry, one where they should make some money. There's a lot of emphasis on other areas of silicon, particularly NAND, and that should lead to a healthier industry. No one's getting into DRAM, whereas historically we've seen successive countries get in: Japan, then South Korea, then Taiwan. The big question is, does China ever get into DRAM? There were rumblings of that a few years ago, but it never came to anything. They had a great opportunity last year, when they could have bought Elpida, but they didn't, so right now we don't think that will happen.
Speaking of NAND flash and Elpida, do you still see Micron converting some of Elpida's DRAM capacity to flash in the aftermath of the acquisition?

Since we last talked, Micron bought Elpida and the other half of the Inotera wafers, so its DRAM footprint is going to be two or three times bigger than it was last year. Micron is also trying to increase its share of the NAND flash market. The great thing about NAND is the demand profile: the industry definitely needs to add wafers, and Micron's competition is adding NAND wafers, so I still believe Micron will shift some production out of DRAM and into NAND.
Long term, what do you think the impact of HMC technology will be?

[Today], you use a lot of cycles because you're waiting for memory to return data. HMC helps get everybody over the memory wall, and that can have a measurable impact on overall compute utilization. In personal compute systems, performance has [previously] gone up incrementally, not exponentially. What I'm hoping is that HMC offers a step-function performance boost: instead of improving performance 25 percent this year, we're improving it 10X. What it boils down to is that you suddenly have a lot more cycles available to you because you're not sitting there waiting for the memory to return data. It will be interesting to see what happens.
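Howard's "waiting for memory" point can be illustrated with the textbook stall-cycle model. The sketch below uses assumed miss rates and penalties purely for illustration; HMC's actual gains come largely from bandwidth and concurrency, which this simple latency model does not capture.

```python
# Textbook stall-cycle model: effective CPI = base CPI +
# misses_per_instruction * miss_penalty. All numbers are assumed,
# illustrative values, not measurements of any real system.

def effective_cpi(base_cpi, misses_per_instr, miss_penalty_cycles):
    return base_cpi + misses_per_instr * miss_penalty_cycles

BASE_CPI, MISS_RATE = 1.0, 0.02   # 2 DRAM accesses per 100 instructions
baseline = effective_cpi(BASE_CPI, MISS_RATE, 200)

for penalty in (200, 100, 50):    # cycles stalled per DRAM access
    cpi = effective_cpi(BASE_CPI, MISS_RATE, penalty)
    print(f"{penalty:>3}-cycle penalty: CPI {cpi:.1f}, "
          f"{baseline / cpi:.2f}x the baseline throughput")
```

Even halving the effective stall penalty buys a 1.67X throughput gain in this toy model, which is the sense in which freeing the cycles spent waiting on memory could deliver a step-function rather than an incremental improvement.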
The story with Intel is that they will do their own thing and can stay independent of HMC. Check out the latest word on Haswell's L4 cache: it's actually DRAM made by Intel itself (on its 22 nm trigate process).
I don't think HMC benefits nonvolatile (NV) memory or benefits from it, since higher latency and less frequent access are to be expected. But if there were a new type of memory that behaves very much like DRAM working memory (maybe STT?), perhaps it could be considered as well.