TORONTO — High bandwidth memory gained some momentum last week as Samsung Electronics announced it started mass production of its second-generation technology, dubbed Aquabolt.
Designed for use with next-generation supercomputers, artificial intelligence (AI) and graphics systems, the 8 GB High Bandwidth Memory-2 (HBM2) offers the highest DRAM performance levels and the fastest data transmission rates available today, with a 2.4 gigabits-per-second (Gbps) data transfer speed per pin at 1.2V, said Tien Shiah, product marketing manager for High Bandwidth Memory at Samsung. That's nearly a 50 percent performance improvement per package, he said, compared with Samsung's previous-generation HBM2 package, Flarebolt, which runs at 1.6Gbps per pin at 1.2V and 2.0Gbps at 1.35V.
In a telephone interview with EE Times from CES, Shiah said a single Aquabolt package will offer a 307GBps data bandwidth, achieving 9.6 times faster data transmission than an 8Gb GDDR5 chip, which provides a 32GBps data bandwidth. This means using four packages in a system will enable a 1.2 terabytes-per-second (TBps) bandwidth, he said, boosting overall system performance by as much as 50 percent.
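Those figures follow from simple arithmetic, assuming HBM2's standard 1,024-bit-wide interface per stack (the per-stack pin count is not stated in the article; it comes from the JEDEC HBM2 spec). A quick sketch:

```python
# Sanity-check the bandwidth figures quoted above.
# Assumption: each HBM2 stack has a 1,024-bit (1,024-pin) data interface,
# per the JEDEC HBM2 standard; the article itself doesn't give the pin count.

def bandwidth_gb_per_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth in gigabytes per second (8 bits per byte)."""
    return pin_speed_gbps * bus_width_bits / 8

aquabolt = bandwidth_gb_per_s(2.4, 1024)  # per-package bandwidth
print(aquabolt)             # ~307 GB/s per package, matching the quoted figure
print(aquabolt / 32)        # ratio vs. an 8Gb GDDR5 chip's 32 GB/s
print(4 * aquabolt / 1000)  # four packages, in TB/s
```

Running this gives roughly 307.2 GB/s per package, a 9.6x advantage over GDDR5, and about 1.2 TB/s for a four-package system, consistent with the numbers Shiah cites.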
A need for even faster access to data, particularly for AI and machine learning algorithms, is driving adoption of HBM, said Shiah. He added that HBM is the fastest form of DRAM on the market and offers both space and power savings. "You have it all in a single package," he said.
However, Shiah said, this configuration requires design expertise, as HBM has to be integrated with the ASIC using silicon interposers.
Samsung also made use of its expertise in through-silicon via (TSV) technology related to thermal control, said Shiah. A single Aquabolt package consists of eight 8Gb HBM2 dies, which are vertically interconnected using more than 5,000 TSVs. Samsung also increased the number of thermal bumps between the HBM2 dies, which enables stronger thermal control in each package. Finally, there's a protective layer at the bottom, which increases the package's overall physical strength.
HBM is often discussed in the same breath as Hybrid Memory Cube (HMC) as an avenue for getting the fastest DRAM performance. There's not a great deal of difference between the two technologies, but given that HBM has been getting wider adoption, it may win out over HMC, just as VHS eclipsed Beta.
But even if HBM is the winner, it's still a niche technology, said Jim Handy, principal analyst with Objective Analysis. "I do see it eventually becoming mainstream, but today it's really expensive technology. That's because TSVs are an expensive thing to put on silicon wafers," Handy said.
HBM has been somewhat stealthy to date, normally used in Nvidia and AMD graphics cards, where a whole lot of bandwidth needs to get to the GPU, he said.
While TSVs do offer advantages over wire bonding, especially when you can put 5,000 TSVs on a chip, the technology is still relatively expensive, Handy said. "There's a hump to get over with any technology like this," Handy said. "The price will go down as volume goes up, but high prices keep volume down."
That's why HBM is still mainly found in high-end GPU cards, Handy said, though it's now moving into some supercomputers and will reach standard servers at some point.
"It's hard to tell what's going to drive high enough volumes to get the costs down," Handy added. "Everybody argues about how AI is going to fit into the picture."
There's a big push toward FPGAs for AI that provides competition for GPUs. Handy said smartphones could be a target market for HBM in the long run, given that GDDR5 is finding its way into those devices.
As for the HBM vs. HMC debate, the difference is the logic chip at the bottom, said Handy, and although Intel went with Micron's HMC technology, it developed its own logic standard. Like Samsung, the remaining DRAM maker, SK Hynix, has gone the HBM route. He could see Intel moving from its HMC variant over to HBM, and ultimately, Micron as well.
"There's not much difference between HBM and HMC," Handy said. "There's not a lot lost by Micron if it has to convert."
—Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.