HMC, HBM, and other technologies are certainly very interesting, but I don't see how they would replace commodity DDR4 DRAM anytime soon. Am I missing something here?
The cost/bit of DDR3/DDR4 will continue to be much lower than what HMC/HBM can offer for a long time. HMC and HBM do offer interesting possibilities with on-die memory stacking and fast look-aside memory, but will they replace DDR4 as the main memory for processors?
3% × $4,100 = $123, or about $7.6 per GB of power savings (without even counting the other benefits). Considering that DDR costs $11.5 per GB (consumer; server costs more), those power savings look like a good reason for adding an HMC to a cloud server, even as main memory.
Power costs at 10 cents/kWh come to about 80 cents per year per watt. Many data centers have lower contracts than that, but you should add in the overhead of building the power distribution and substation, so on balance it is a fair estimate. A 200W server (which includes the overhead ratio for a modern DC) costs about $160 per year in power. That puts power under 10% of operating costs.
The bigger cost is opportunity cost. A DC is generally built out to the maximum power the site can obtain, so every watt in that 200W budget must be optimized. If you add 1W of DRAM, you subtract 1W from something else. Each watt thus has an opportunity cost of about 0.5% of the server, which is more like $10 per year.
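The two cost models above can be sketched in a few lines of arithmetic. The rates and the 200W figure are from the comments; the $2,000 server price is my assumption, chosen only because it is consistent with "0.5% of the server ≈ $10 per year":

```python
# Back-of-envelope data-center power cost per watt (numbers from the thread).
HOURS_PER_YEAR = 24 * 365  # 8760

# Model 1: plain electricity cost.
rate_per_kwh = 0.10  # $/kWh
cost_per_watt_year = rate_per_kwh * HOURS_PER_YEAR / 1000
print(f"${cost_per_watt_year:.2f} per watt-year")  # $0.88 (the comment rounds to ~80 cents)

server_watts = 200
print(f"${cost_per_watt_year * server_watts:.0f} per server-year")  # $175 (~$160 at 80 cents/W)

# Model 2: opportunity cost -- every watt added displaces a watt elsewhere.
server_price = 2000  # assumed server cost, not from the thread
opportunity_per_watt_year = 0.005 * server_price  # 0.5% of the server per watt
print(f"${opportunity_per_watt_year:.0f} per watt-year opportunity cost")  # $10
```

This makes the point of the comment concrete: the opportunity-cost view values a saved watt at roughly an order of magnitude more than the raw electricity rate does.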
So, you have the right conclusion, but it can be helpful to have a better model of why.
Components which save power in the server market are definitely valuable. Each W saved is worth dollars, so long as performance is not compromised.
I take it that HMC is lower in power, which would reduce opex. However, the cost of the memory itself would be higher. Any data on how much power, say, 16GB of HMC would burn vs. DDR4? I don't think the acquisition cost of HMC could come close to commodity DDR4 memory. Also, the more exotic (for now) manufacturing techniques like TSVs are bound to hurt yield, further increasing cost.
I believe the cost of HBM will come down, but to get there it will need significant volume and improvements in manufacturing. That can only happen if the large CPU vendor (Intel) gets on board. Without that, it cannot replace DDR4.
2GB would make an interesting LLC (Last Level Cache): you could use a large cache line size (512 bytes to 4KB) to optimize transfers between DDR and HMC. I suspect you would still see most of the power savings, as the DDR memory lines would be idle most of the time.
Another application could be as a cache on a rotating-media disk drive; the HMC could be glued right on top of the disk controller.
The size of the modules (2GB) makes this look like a technology for local GPU memory, or perhaps even for mobile devices. But servers? How does this add up to the 128GB or more seen on modern servers? Is there something else planned for that market, or does that remain DDR4 territory?
Since each HMC has 8 layers of DRAM, my guess is that for the first version they prefer to use old manufacturing lines and get some money out of them, instead of taking a risk on new manufacturing lines.
The point is that the chip stack is overkill for a large memory space. 100GB/s of bandwidth per 2GB cube is way too much for building a server with large memory: with 50 modules like that, what host chip would have the interconnect, or even need it? And you can't get much bigger cubes, because the limit is DRAM chip capacity multiplied by the number of TSV layers possible. So the whole thing looks optimized for small-memory scenarios.
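The "overkill" claim is easy to quantify from the numbers in the thread (2GB per cube, 100GB/s per cube, and the 128GB server mentioned earlier):

```python
# Aggregate bandwidth if a large-memory server were built purely from HMC cubes.
cube_capacity_gb = 2     # per cube, from the thread
cube_bandwidth_gbs = 100  # per cube, from the thread

target_gb = 128  # a typical modern server, per the earlier comment
cubes_needed = target_gb // cube_capacity_gb
aggregate_bandwidth = cubes_needed * cube_bandwidth_gbs

print(cubes_needed)         # 64 cubes
print(aggregate_bandwidth)  # 6400 GB/s -- far beyond what any host chip could sink
```

At 64 cubes and 6.4TB/s of aggregate bandwidth, the mismatch between capacity and bandwidth is the whole argument in one multiplication.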
Where is the server equivalent, or is this simply not coming to a server any time soon?
Even though you can chain them together, you are limited to 8 HMC parts per channel. So the CPU will need multiple channels to support a large-memory server. That's no problem: CPUs already support multiple channels, and DDR4 requires far more signals per channel than HMC does.
The real problem is the size of the HMC package. They are huge (31mm × 31mm)! You can't cram enough of them onto a motherboard or DIMM to build a server with 1.5TB of DRAM, as you can with the 64GB DDR4 DIMMs that will be available later this year.
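A rough footprint check makes the point, using only the figures in this comment (2GB cubes, 31mm × 31mm packages, 1.5TB target, 64GB DIMMs). Note this counts raw package area only, ignoring routing and spacing, so the real board area would be even worse:

```python
# Board-area sanity check: 1.5 TB from 2 GB HMC cubes vs 64 GB DDR4 DIMMs.
cube_gb = 2
cube_side_mm = 31
target_gb = 1536  # 1.5 TB

cubes = target_gb // cube_gb
package_area_cm2 = cubes * (cube_side_mm / 10) ** 2  # raw package area only

print(cubes)                      # 768 cubes
print(round(package_area_cm2))    # ~7380 cm^2 of packages alone

dimm_slots = target_gb // 64
print(dimm_slots)                 # 24 DIMM slots with 64 GB DIMMs
```

768 packages covering roughly three quarters of a square meter, versus 24 DIMM slots, is why the capacity-per-package number matters so much for servers.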
As TanjB pointed out, the bandwidth/GB just doesn't make sense with so little memory in the fabric. Why would I put 32GB on a server using HMC when I can get the same capacity on a single DIMM, at lower cost and in less physical space?
Until they actually get more GB per HMC, this looks like a great product for high-speed switches and high-performance FPGA-attached hardware accelerators, but not for servers.
Look at Dell's and others' rack servers. You can drop 512+GB of memory into them today - and many do.