Everyone seems to be backing off from putting a stack of memory chips directly on top of a processor (per the original JEDEC Wide I/O spec) because of yield, logistics, known-good-die (KGD), thermal -- whatever.
But at least by stacking DRAM chips with TSVs they are shrinking the distance between individual chips (compared to spreading them across a DIMM) and just about eliminating the synchronization and latency penalty from chip-to-chip propagation delay.
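For a sense of scale, here's a rough back-of-envelope sketch (my own assumed numbers, not from the article: ~0.5c signal propagation on an organic substrate, a few-cm trace between packages on a DIMM versus a ~50 um TSV through a thinned die):

    # Rough comparison of chip-to-chip flight time: a few-cm DIMM trace
    # vs. a ~50 um TSV in a stacked die. All figures are illustrative assumptions.

    signal_speed_cm_per_ns = 15.0   # ~0.5c on FR-4 / organic substrate (assumed)

    dimm_trace_cm = 5.0             # assumed chip-to-chip trace length on a DIMM
    tsv_length_cm = 50e-4           # ~50 um TSV through a thinned die (assumed)

    dimm_delay_ps = dimm_trace_cm / signal_speed_cm_per_ns * 1000
    tsv_delay_ps  = tsv_length_cm / signal_speed_cm_per_ns * 1000

    print(f"DIMM trace flight time: ~{dimm_delay_ps:.0f} ps")   # ~330 ps
    print(f"TSV flight time:        ~{tsv_delay_ps:.2f} ps")    # ~0.33 ps

Hundreds of picoseconds of flight time (plus the skew that comes with it) shrink to a fraction of a picosecond once the dies sit on top of each other.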
Both Micron's HMC and now HBM (it's a JEDEC standard, not just SK Hynix's) put the memory stack off the processor but on the same PCB or substrate/interposer, be it organic or silicon. So the result is a 2D or 2.5D package with the memory itself in 3D, not a pure 3D stack.
Micron's HMC memory stack includes a controller logic die, all vertically connected by through-silicon vias (TSVs). The stack sits off the CPU and is connected to it by SerDes links.
At least right now, it's aimed at the high end: FPGAs, network controllers, that sort of thing.
A certain mass-market GPU vendor didn't like this configuration (not willing to pay for the controller chip).
SK Hynix must have heard that complaint.
So their HBM is just a stack of memory (with the controller chip optional), all hooked up by TSVs. It sits off the CPU/GPU and is connected to it by wide parallel I/O with regular drivers (as it stands now), with bandwidth about 20% lower, at 128 GB/s.
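Roughly where those numbers come from (the 1024-bit / ~1 Gb/s-per-pin figures are the first-gen HBM spec; the 160 GB/s HMC figure is the commonly quoted aggregate over its SerDes links, assumed here for the comparison):

    # Sketch of the "20% lower at 128 GB/s" arithmetic.
    hbm_bus_width_bits = 1024   # HBM gen-1 interface width per stack
    hbm_pin_rate_gbps  = 1.0    # ~1 Gb/s per pin

    hbm_bw_gbytes = hbm_bus_width_bits * hbm_pin_rate_gbps / 8   # = 128 GB/s
    hmc_bw_gbytes = 160.0                                        # assumed HMC figure

    print(f"HBM stack bandwidth: {hbm_bw_gbytes:.0f} GB/s")
    print(f"vs HMC: {(1 - hbm_bw_gbytes / hmc_bw_gbytes) * 100:.0f}% lower")  # ~20%

The trade: HBM gets there with a very wide, relatively slow parallel bus over an interposer instead of HMC's fast serial links and logic die.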