But we do have DIMM and SSD built into one entity. In fault-tolerant data centers the SSD is not backed up locally. If your machine goes down, you have to assume it will be days before it is scheduled for repair, so backing data up onto Flash on the same machine is pointless: it is still inaccessible. A much more useful approach, if you want to be fast and durable, is to ensure the data is also written to a remote SSD that shares minimal common points of failure. Depending on how many 9's of reliability you need, that might even mean a remote location. The "one entity" is a data center or a cloud, not a DIMM.
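To make the point concrete, here is a minimal sketch of the replicate-before-acknowledge idea described above. Everything in it (the `Replica` class, the replica names, `durable_write`) is invented for illustration; real systems would replicate over the network to machines in separate failure domains.

```python
import hashlib

class Replica:
    """Simulated SSD in a distinct failure domain (machine, rack, or region)."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, value):
        # Persist the value, then acknowledge with a checksum of what landed.
        self.store[key] = value
        return hashlib.sha256(value).hexdigest()

def durable_write(replicas, key, value, min_acks=2):
    """Report success only after enough independent replicas confirm the write."""
    acks = [r.write(key, value) for r in replicas]
    if len(acks) < min_acks:
        raise IOError("not enough replicas acknowledged the write")
    return acks

# Replicas chosen to share minimal common points of failure.
replicas = [Replica("local-ssd"),
            Replica("remote-rack-ssd"),
            Replica("remote-dc-ssd")]
acks = durable_write(replicas, "page:42", b"payload")
```

The write is durable in the sense the comment means: losing the local machine, Flash and all, does not lose the data, because a copy was confirmed elsewhere before the write was considered done.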
Adding Flash to the DIMM barely moves the needle for 9's of availability. If it were really cheap, that marginal shift might be attractive. But it is not cheap. For the same money, the reliability can be, and is, designed into data centers at higher levels. People should at least read up on Hadoop (a little dated now, but essential in its day because it scaled reliably) or look at the literature for the Open DC. Reliable systems are no longer built with gold-plated, super-redundant components. They are built from sensibly reliable, value-for-money components plus smart failover strategies, because that works even in the worst cases, and once the worst cases are covered your job is done. Extra levels of redundancy are just extra levels of complexity, and complexity is itself a reliability threat.
And outside of data centers, who would be buying new architectures? Change comes where the fast growth is. So this stuff is clever, but the first question is: who will use it?
It is hard to see any real advantage to separate non-volatility for DIMM modules. Battery backup for the entire system (if not the whole rack of systems) is already well established, and maintaining the memory in RAM separately from the rest of the system state would require at least operating system support, if not specialized application code. It does not seem all that useful to me.
My company, McObject, has built support for NVDIMM into our in-memory database system, eXtremeDB.
Battery backup of the whole system does not protect against, e.g., a kernel panic. Further, if your system goes down due to a kernel panic, it's not 'days to repair'; it's back up in minutes. But without NVDIMM, the contents of memory are wiped, and you have to re-provision the in-memory database from some relatively slow source (and even SSD or PCIe Flash is slow, on a relative basis).
IMHO, NVDIMM is not about 'availability', it's about 'durability' (the 'D' in database systems' ACID properties). NVDIMM is the only way to get in-memory DBMS performance with the level of durability that is assumed for persistent-storage DBMS.
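The durability contract being described can be sketched with a memory-mapped file standing in for an NVDIMM region. This is an illustration only: on real NVDIMM hardware the commit point would be a cache-line flush (e.g. via libpmem) rather than `mmap.flush()`, and the file path here is invented, but the shape is the same: stores happen at memory speed, and a store is durable only after an explicit flush.

```python
import mmap
import os
import struct
import tempfile

# A plain file plays the role of the persistent memory region.
path = os.path.join(tempfile.mkdtemp(), "nvdimm.img")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # pre-size the "NVDIMM" region

# Normal operation: update the in-memory database image in place.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:8] = struct.pack("<q", 12345)  # in-place store, memory speed
    mem.flush()                          # commit point: now durable
    mem.close()

# After a simulated crash and restart, the committed value survives,
# with no slow re-provisioning from external storage.
with open(path, "rb") as f:
    value = struct.unpack("<q", f.read(8))[0]
```

The point of the sketch is the separation of concerns the comment draws: flushing makes the write durable (the 'D' in ACID) without any claim about availability, which is handled elsewhere.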
There are free (no registration required) white papers on our website that discuss/compare all of this, at length.
Netlist, one of the founding members of this SIG, noted in its latest quarterly conference call that it was chosen, along with one other company, as a potential supplier of NVDIMMs for a large Hyperscale customer's retrofit project beginning in 2015. The customer has 1M servers, so they must see significant value in this new technology if this occurs.