Boise, Idaho -- The Hybrid Memory Cube Consortium (HMCC) moved one of the most discussed heirs to DDR4 closer to reality late last month by releasing an update to the HMC specification. The first draft of the second-generation specification supports increased data rates, raising short-reach (SR) lane speeds from 10 Gbit/s, 12.5 Gbit/s, and 15 Gbit/s up to 30 Gbit/s. The update also acknowledges existing industry nomenclature by migrating the associated channel model from SR to very short-reach (VSR), while the ultra short-reach (USR) definition also increases in performance, from 10 Gbit/s up to 15 Gbit/s. The Gen2 specification doubles the throughput of the original specification.
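The doubling follows directly from the lane rates. As a back-of-the-envelope sketch (not from the article): assuming HMC's 16-lane, full-duplex link layout, the per-link bandwidth at the old and new top rates works out as follows; the lane count and duplex assumption are mine, not stated above.

```python
# Rough per-link bandwidth estimate for HMC lane rates.
# Assumption (not from the article): 16 lanes per link, full duplex.
LANES_PER_LINK = 16

def link_bandwidth_gbytes(lane_rate_gbit: float, lanes: int = LANES_PER_LINK) -> float:
    """Aggregate full-duplex link bandwidth in GB/s for a given lane rate."""
    # x2 counts both directions; /8 converts bits to bytes
    return lane_rate_gbit * lanes * 2 / 8

gen1 = link_bandwidth_gbytes(15.0)  # Gen1 SR top rate -> 60.0 GB/s per link
gen2 = link_bandwidth_gbytes(30.0)  # Gen2 VSR top rate -> 120.0 GB/s per link
assert gen2 == 2 * gen1             # the doubling the spec update delivers
```

Under these assumptions, moving the top lane rate from 15 Gbit/s to 30 Gbit/s doubles per-link throughput, matching the consortium's claim.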
Mike Black, technology strategist at Micron, one of HMCC’s founding companies along with Samsung, said the goal is to release a final Gen2 specification in May. In the meantime, HMCC is open to adding more adopters to its ranks, which already number more than 120, alongside a core group of developer members consisting of Micron, Samsung, Altera, ARM, IBM, Microsoft, Open-Silicon, SK Hynix, and Xilinx.
Black said the call for more memory bandwidth is coming from a number of fronts. “We’re hearing a lot from CPU partners that they are migrating performance requirements faster than memory can keep up,” he said. “It’s created a memory wall that’s inhibited them from building a system that can actually utilize and optimize all of the memory performance a CPU can have.” High-performance computing, big data analytics and more video over networks are also creating pressure to improve memory bandwidth.
The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses vertical conduits called through-silicon vias (TSVs) to electrically connect a stack of individual chips, combining high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.
A big focus of the HMC technology, said Black, is total cost of ownership -- providing high bandwidth with low power. Micron has released the first commercial HMC sampling in a 2 GB density with 160 GB/s of memory bandwidth while consuming 70% less energy. Meanwhile, Altera has shipped evaluation boards and demonstrated interoperability between HMC devices and its next-generation 20nm and 14nm FPGAs and SoCs, allowing customers to evaluate and develop HMC-based, high-performance systems. Xilinx’s UltraScale FPGA architecture devices were designed to support this specification and are shipping now.
HMC is not the only contender that might take the reins from DDR4, says Jim Handy, principal at Objective Analysis. He said High Bandwidth Memory (HBM), which is being championed by SK Hynix, is also an option and has many similarities: like HMC, it uses TSVs, building on Wide I/O technology. JEDEC’s subcommittee for DRAM memories has been working on definitions of standardized 3D memory stacks for nearly four years. Last October it published "JESD235: High Bandwidth Memory (HBM) DRAM," which uses a wide-interface architecture to achieve high-speed, low-power operation.
Black said HBM will require a new manufacturing infrastructure and supply chain to support production, but he doesn’t see it competing with HMC in the near future. Recalling Rambus’ efforts to become the dominant memory interface in the 1990s, Handy said there are parallels: Intel’s strategy will be a determining factor, given that the primary market for HMC will be systems, servers, and PCs with Intel processors.
Handy predicts that computing architecture will change quite a bit by the time HMC sees widespread adoption, including the incorporation of flash memory into systems as modules rather than as an SSD.