It's interesting to look at the status of alternatives to DRAM among memory architectures. Up to this point they weren't getting much attention, but now that they have been whittled down to just a few, those alternatives are gaining visibility and credibility, as it is hard to deny what has been proven over the course of time.
It would be interesting to know in quantitative terms what "in low volume production" means, and even more intriguing, what "it has its place, it's its own thing" actually means in the light of the following.
By late December 2013 both the 128Mbit and the 1Gbit MCP had been quietly removed from the Micron product list on their web site. It was reported elsewhere* that Micron had indicated that their earlier generations of phase change memory were no longer available for new designs or for those wishing to evaluate the technology, and that the focus for PCM had moved to developing a new PCM process in order to lower bit costs and power while improving performance. What then is the PCM device type that is in low volume production? Why would low volume production be maintained for devices that are no longer available to potential customers and that have the bit cost, power and performance limitations indicated? If PCM is not suitable for NAND or DRAM replacement, for what then is it suited?
Micron also have a paper co-authored with Sony at ISSCC 2014 that reports a 16Gbit ReRAM based on a 27nm process. One wonders why that did not get a mention alongside STT-MRAM as one of the whittled-down list of emerging memory types with future potential on which Micron are working.
Micron has a tight NDA for the HMCs, so prices are unlikely to leak in the near future. But in the long run it should be cheaper than regular DDR (in terms of overall system cost), since there is a reduction in the logic component, a lower pin count and a smaller PCB footprint. We are spec'ing our server-class CPUs to use only HMCs (but then my target date is 2017), since I believe SERDES-based links are the way to go. A great advantage you get is sharing physical ports with I/O channels, so you can treat a SERDES lane as either memory or I/O and switch the protocol handler inside the processor.
Probably one SERDES bank will have to be dedicated to memory, but the others can be switched on the fly. I have presented this at a conference and have started a host-side HMC silicon implementation, which will be released into open source.
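To make the lane-switching idea more concrete, here is a minimal C sketch. It is purely illustrative: the lane structure, mode names and switching routine are hypothetical stand-ins for whatever the real SoC configuration registers would be, so the example compiles and runs stand-alone on an ordinary host.

/*
 * Minimal sketch, purely illustrative: the lane struct, mode names and
 * switching routine below are hypothetical stand-ins for real SoC
 * configuration registers, so the example compiles and runs stand-alone.
 */
#include <stdio.h>

typedef enum { LANE_UNUSED, LANE_HMC_MEMORY, LANE_GENERIC_IO } lane_mode_t;

typedef struct {
    int         lane_id;
    lane_mode_t mode;    /* which protocol handler currently owns the lane */
    int         locked;  /* 1 = dedicated to memory, never switched on the fly */
} serdes_lane_t;

/* Retarget one lane's protocol handler, unless the lane is dedicated. */
static int serdes_set_mode(serdes_lane_t *lane, lane_mode_t mode)
{
    if (lane->locked)
        return -1;       /* dedicated lane, refuse to switch */
    lane->mode = mode;
    return 0;
}

int main(void)
{
    /* One bank of four lanes: lane 0 is permanently dedicated to the
     * memory protocol, the rest can be reassigned as the workload shifts. */
    serdes_lane_t bank[4] = {
        { 0, LANE_HMC_MEMORY, 1 },
        { 1, LANE_GENERIC_IO, 0 },
        { 2, LANE_GENERIC_IO, 0 },
        { 3, LANE_UNUSED,     0 },
    };

    /* Workload now needs more memory bandwidth: retask lanes 1 and 2. */
    serdes_set_mode(&bank[1], LANE_HMC_MEMORY);
    serdes_set_mode(&bank[2], LANE_HMC_MEMORY);

    for (int i = 0; i < 4; i++)
        printf("lane %d -> mode %d (locked=%d)\n",
               bank[i].lane_id, bank[i].mode, bank[i].locked);
    return 0;
}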
But we want to go one step further. It would make sense to shift the MMU into the HMC, so that an HMC block can provide a section of the total system memory to the CPUs attached to it. This works well in single address space OSs, which can have fully virtual caches.
So in a sense, the HMC boots first, sets up a virtual memory region, and then the CPUs attach to these regions. And once you have logic inside the DRAM, you can try out all kinds of things inside memory: data prefetch, virus scanning, ... The list goes on.
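As a rough sketch of that boot-then-attach ordering (again, every name here is hypothetical; the HMC-side logic is modelled with plain C structs so the flow can be compiled and traced on an ordinary host):

/*
 * Minimal sketch, purely illustrative: the region table and attach call
 * are hypothetical, modelling the idea that the HMC boots first, carves
 * out virtual memory regions, and the CPUs then attach to them.
 */
#include <stdio.h>

#define HMC_REGIONS 4

typedef struct {
    size_t base;    /* start of the region in the HMC's virtual space */
    size_t length;  /* size of the region in bytes */
    int    owner;   /* CPU id that attached, or -1 if still free */
} hmc_region_t;

static hmc_region_t regions[HMC_REGIONS];

/* Step 1: the HMC "boots" first and lays out its virtual memory regions. */
static void hmc_boot(size_t region_bytes)
{
    for (int i = 0; i < HMC_REGIONS; i++) {
        regions[i].base   = (size_t)i * region_bytes;
        regions[i].length = region_bytes;
        regions[i].owner  = -1;
    }
}

/* Step 2: a CPU attaches to the first free region and learns its base. */
static int cpu_attach(int cpu_id, size_t *base_out)
{
    for (int i = 0; i < HMC_REGIONS; i++) {
        if (regions[i].owner == -1) {
            regions[i].owner = cpu_id;
            *base_out = regions[i].base;
            return 0;
        }
    }
    return -1;  /* no region left for this CPU */
}

int main(void)
{
    hmc_boot(1UL << 30);   /* 1 GiB per region, purely for illustration */

    size_t base;
    for (int cpu = 0; cpu < 2; cpu++) {
        if (cpu_attach(cpu, &base) == 0)
            printf("CPU %d attached at HMC virtual base 0x%zx\n", cpu, base);
    }
    return 0;
}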
I could keep going, but I hope this shows why a SERDES-based memory architecture can revolutionize system design. It is a lot more than optimizing DRAM or reducing power.
We are doing sim. runs to see if these architectures have any merit. I am also partly sceptical (even though it is my proposal!). But I guess the sim. results will answer all the questions.
If someone wants to have a more detailed discussion on these, send me an email.
Note for all you patent trolls out there: with this post these ideas are hereby put into the public domain! And this definitely constitutes prior art. So no patenting.
There is a lot of hush-hush about memory controller ownership in HMC. Intel of course wants to put all of the ownership in its CPU, as would anyone who integrates an on-chip memory controller into the main processing unit. It's a big factor in chip design strategy. Designing with an HMC-based controller is actually a big risk.