It is also worth keeping in mind that this is not an entirely new topic in OS design; several memory technologies used in the past, most notably core, bubble memory, and battery-backed SRAM, were also non-volatile. The main reasons dynamic-refresh memory became ubiquitous for primary storage from the early 1970s onward were operating speed and production cost: DRAM is cheap, low-power, and above all very, very fast, and it has gotten faster every year.
The main thing that NVDIMM does is make high-speed solid-state memory non-volatile, which is itself a feat, and it does make moving towards persistent-state operating systems more practical. You will note, first, that almost no persistent OS so far (e.g., CapROS) has been built exclusively on non-volatile memory; you still need to be able to boot from cold in cases where, for example, memory inconsistencies arise.
Second, note that in none of those cases was there any consideration of eliminating secondary storage entirely. Even NVDIMM is not really stable enough for long-term storage (nor is flash-based bulk storage, for that matter, which is one reason SSDs haven't driven disk drives out of the market despite the drop in price), so archival storage will still require hard disks, optical disks, and perhaps even tape backup for staged archiving. While laptops and even many desktop PCs (which are becoming a niche item anyway, as most people use tablets for everything and put their data on, snicker, 'cloud' servers) will likely drop hard disks, they will probably keep flash drives for cold boots, while server environments will simply treat NVDIMM as another tier in their staged archival systems.