HMC, HBM and other technologies are certainly very interesting, but I don't see how they would replace commodity DDR4 DRAM anytime soon. Am I missing something here?
The cost/bit of DDR3/DDR4 will continue to be much lower than what HMC/HBM will offer for a long time. HMC and HBM do offer interesting possibilities with on-die memory stacking and fast look-aside memory, but will they replace DDR4 as the main memory for processors?
3% * $4,100 = $123, or about $7.6 per GB of power savings (without speaking about other benefits). Considering that consumer DDR costs about $11.5 per GB (server DDR costs more), those power savings look like a good reason for adding an HMC to a cloud server, even as main memory.
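A back-of-envelope version of that arithmetic, the way I read it (the $4,100 server cost, the 3% saving, and spreading it over a 16GB configuration are all my assumptions, not published figures):

```python
# Sketch of the savings-per-GB arithmetic above.
# All inputs are assumptions for illustration, not vendor figures.
server_cost = 4100            # assumed server cost in dollars
power_saving_fraction = 0.03  # assumed 3% of server cost saved via lower memory power
memory_gb = 16                # assumed memory capacity the saving is spread over
dram_cost_per_gb = 11.5       # quoted consumer DDR price per GB (server DDR costs more)

saving_total = power_saving_fraction * server_cost   # ~$123
saving_per_gb = saving_total / memory_gb             # ~$7.7/GB, i.e. the ~$7.6/GB figure above
print(f"total saving ~${saving_total:.0f}, ~${saving_per_gb:.1f}/GB "
      f"vs ${dram_cost_per_gb}/GB for DRAM")
```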
Power costs at 10 cents/kWh are about 80 cents per year per watt. Many data centers have lower contracts than that, but you should also add in the overhead of building the power distribution and substation, so on balance it is a fair estimate. A 200W server (which includes the overhead ratio for a modern DC) costs about $160 per year for power. This puts the cost of power under 10% of operating costs.
The bigger cost is opportunity cost. A DC is generally built out to the max power the site can obtain, so every watt in that 200W must be optimized. If you add 1W in DRAM, you subtract 1W from something else. Each W thus has an opportunity cost of about 0.5% of the server, which is more like $10 per year.
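A sketch of that model with the numbers above (the annualized server cost is my assumption, chosen so that power lands under 10% of operating cost and 0.5% of the server works out to roughly $10):

```python
# Sketch of the cost model above. The annualized server cost is an assumption,
# not a figure from the thread.
electricity = 0.10                                         # $/kWh
hours_per_year = 8760
cost_per_watt_year = electricity * hours_per_year / 1000   # ~$0.88; call it ~$0.80 with cheaper contracts
server_watts = 200                                         # includes DC overhead ratio
power_cost = server_watts * 0.80                           # ~$160 per year

annual_server_cost = 2000     # assumed annualized cost per server (capital + operations)
print(f"power ~${power_cost:.0f}/yr, {power_cost/annual_server_cost:.0%} of total")

# Opportunity cost: the site is power-limited, so 1W displaces 1/200 of a server.
opportunity_per_watt = annual_server_cost / server_watts   # ~$10 per watt-year
print(f"opportunity cost ~${opportunity_per_watt:.0f} per watt per year")
```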
So, you have the right conclusion, but it can be helpful to have a better model of why.
Components which save power in the server market are definitely valuable. Each W saved is worth dollars, so long as performance is not compromised.
The size of the modules (2GB) makes this look like a technology for local GPU memory or perhaps even for mobile devices. But servers? How does this add up to the 128GB or more seen on modern servers? Is there something else planned for that market, or does that remain DDR4 territory?
Since each HMC has 8 layers of DRAM, my guess is that for the first version they prefer to use old manufacturing lines and get some money out of them, instead of taking a risk on new manufacturing lines.
The point is that the chip stack is overkill for large memory capacity. 100GB/s of bandwidth per 2GB cube is way too much for building a large-memory server - with 50 modules like that, what host chip would have the interconnect for them, or even need it? And you can't get much bigger cubes, because capacity is limited to the DRAM die capacity times the number of TSV layers possible. So the whole thing looks optimized for small-memory scenarios.
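To make that capacity ceiling concrete (the per-die density here is an illustrative guess, not the actual part's spec; only the 8-layer stack and 2GB cube size come from the thread):

```python
# Illustrative cube-capacity ceiling: cube size = per-die capacity x stacked layers.
die_gbit = 2          # assumed DRAM die density in gigabits
layers = 8            # TSV-stacked DRAM layers, per the comment above
cube_gbytes = die_gbit * layers / 8
print(f"{layers} layers x {die_gbit}Gb dies -> {cube_gbytes:.0f} GB per cube")
# Even doubling the die density only doubles the cube, so a large-memory server
# would still need dozens of cubes and the host links to drive them.
```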
Where is the server equivalent, or is this simply not coming to a server any time soon?
Even though you can chain them together, you are limited to 8 HMC parts per channel. So the CPU will need multiple channels to support a large-memory server. That's no problem - CPUs already support multiple channels, and a DDR4 channel requires far more signals than an HMC channel.
The real problem is the size of the HMC parts. They are huge (31mm x 31mm)! You can't cram enough of them onto a motherboard or DIMM to get a server with 1.5TB of DRAM, as you can with the 64GB DDR4 DIMMs that will be available later this year.
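Rough numbers on that gap, using only the figures mentioned in this thread (1.5TB target, 2GB cubes, 31mm x 31mm packages, 8 cubes per chain):

```python
# How many 2GB cubes would it take to match a 1.5TB DDR4 server?
target_tb = 1.5
cube_gb = 2
cubes_needed = target_tb * 1024 / cube_gb           # 768 cubes
cube_area_mm2 = 31 * 31                             # 31mm x 31mm package footprint
board_area_m2 = cubes_needed * cube_area_mm2 / 1e6  # ~0.74 m^2 of package area alone
channels_needed = cubes_needed / 8                  # 96 host channels at 8 cubes per chain
print(f"{cubes_needed:.0f} cubes, ~{board_area_m2:.2f} m^2 of packages, "
      f"{channels_needed:.0f} host channels")
```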