As high-performance computing and big data applications take center stage in the enterprise data center, memory technologies emerge as a way to adapt to changing needs. Software-defined memory and persistent memory technologies offer increased performance and speed, enabling IT to keep up with the demands of high-performance computing. However, many of these new memory technologies aren't conducive to a plug-and-play approach and require changes to your software. Before you take the leap into these emerging technologies, understand how to use them optimally.
Get to know persistent memory
As CPU performance grows steadily while storage performance remains comparatively flat, persistent memory aims to close the gap in the data center. Persistent memory is technically persistent storage that you can use as memory because of its low latency. From a programming perspective, it appears as a byte-addressable medium: data access uses CPU load/store instructions rather than block read/write I/O, and latency is predictable.
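That load/store model can be sketched with Python's mmap module. In the sketch below, a regular temporary file stands in for a file on a DAX-enabled file system backed by an NVDIMM; the 4 KB region size and byte offsets are purely illustrative.

```python
import mmap
import os
import tempfile

# Stand-in for persistent memory: a real deployment would mmap a file
# on a DAX-enabled file system so stores land directly on the NVDIMM.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)                 # carve out one 4 KB "pmem" region

with mmap.mmap(fd, 4096) as pmem:
    pmem[128:141] = b"hello, pmem!\n"  # store: plain byte writes, no I/O call
    pmem.flush()                       # push the update to the backing medium
    data = bytes(pmem[128:141])        # load: read back by byte address

os.close(fd)
os.remove(path)
```

Note that the application addresses individual bytes at arbitrary offsets, which is exactly what block-oriented read/write APIs cannot do.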
A nonvolatile dual in-line memory module (NVDIMM) acts as both persistent memory and block storage. There are three types of NVDIMM: NVDIMM-N is memory-mapped dynamic RAM (DRAM) with onboard flash, NVDIMM-F is memory-mapped flash, and NVDIMM-P combines memory-mapped DRAM and memory-mapped flash. But to take advantage of these persistent memory technologies, ensure that your software is compatible. Most applications use block storage for data persistence, so you must either modify the applications or place a file system between the persistent memory and the application.
Persistent memory probably won't have widespread adoption until around 2019. Both Linux and Windows Server 2016 have strong support for the technology, but the cost of NVDIMMs will need to drop before most enterprises take the plunge into persistent memory.
Software-defined infrastructure extends to memory
After adopting persistent memory, the next logical step is to use it to complement other persistent storage, and software-defined memory plays a role in that process. Software-defined memory handles memory and storage as a service, using software tools to move data across physical devices. It adds software-managed memory tiers beyond DRAM, allowing users to flexibly partition the total memory and present a set of services to the host system that are available across the cluster network. It's currently a nascent concept, and the software being developed for software-defined memory is tied to hardware from vendors such as Intel and Diablo Technologies. However, most of the code should run on a mix of hardware as the technology evolves.
Software-defined memory improves further with data services, such as compression, replication and encryption. Compression can increase the effective capacity of a persistent NVDIMM roughly fivefold, for example, and reduce the time needed to transfer data into memory. The power of software-defined memory relies heavily on the data service tools that vendors will develop.
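As a rough illustration of why compression is a useful data service, the sketch below uses Python's zlib on artificially repetitive sample data; the ratio achieved here is not representative, because real ratios depend entirely on the workload.

```python
import zlib

# Highly compressible sample data standing in for a 4 KB page of
# application state; real workloads compress far less predictably.
page = (b"timestamp=000000 status=OK " * 160)[:4096]

compressed = zlib.compress(page, 6)     # level 6: a typical speed/ratio balance
ratio = len(page) / len(compressed)

# Fewer bytes to move means less time spent transferring the page
# between the persistent tier and DRAM.
smaller = len(compressed) < len(page)
```

The same principle is what lets a compressed persistent tier present more effective capacity to the host than the raw NVDIMM provides.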
Use tools to optimize software-defined memory
In the future, it will be important to optimize storage and memory infrastructure for software-defined memory technologies, such as NVDIMMs. For example, some servers could have 500 GB of RAM and 10 TB of NVDIMM mapped into the memory address space. An OS driver identifies the NVDIMM space and reserves it out of the general memory pool, which keeps the OS from interfering with persistent data and enables the data to persist across power-down.
Add software-based tools, such as a persistent RAM disk (PRAMDISK), on top of the driver to capitalize on the NVDIMM space. A PRAMDISK is simple to implement, and it delivers far higher storage performance than a Peripheral Component Interconnect Express (PCIe) solid-state drive (SSD). To optimize performance even further, tune transfers from the PRAMDISK to DRAM, which can move data with a single instruction. Caching tools can extend memory, making the 500 GB of RAM behave like roughly 4 TB. Field-programmable gate array (FPGA) assists for processors can compress the data and double the effective space of the NVDIMM.
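The caching idea amounts to a small, fast DRAM tier fronting a much larger, slower backing tier. The class below is a minimal, hypothetical LRU sketch in Python; the `TierCache` name, block sizes and slot counts are invented for illustration and do not correspond to any vendor tool.

```python
from collections import OrderedDict

class TierCache:
    """Tiny LRU sketch: a small 'DRAM' tier in front of a larger,
    slower backing tier (the NVDIMM space). Sizes are illustrative."""

    def __init__(self, backing, dram_slots=4):
        self.backing = backing        # block_id -> bytes: the big, slow tier
        self.dram = OrderedDict()     # the small, fast tier
        self.dram_slots = dram_slots
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.dram:     # served at DRAM speed
            self.dram.move_to_end(block_id)
            self.hits += 1
            return self.dram[block_id]
        self.misses += 1              # fetch from the backing tier
        data = self.backing[block_id]
        self.dram[block_id] = data
        if len(self.dram) > self.dram_slots:
            self.dram.popitem(last=False)   # evict least recently used
        return data

backing = {i: bytes([i]) * 512 for i in range(64)}   # 64 blocks of "NVDIMM"
cache = TierCache(backing)
for block in [1, 2, 1, 3, 1, 2]:
    cache.read(block)
```

With a skewed access pattern like the one above, most reads hit the small fast tier, which is why a modest amount of DRAM can front a much larger persistent space.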
New server memory types fit for high-performance computing
The software-defined memory model extends to different memory structures, driving the development of technologies such as Hybrid Memory Cube (HMC). HMC creates 3D arrays with serial memory access by stacking memory chips into vertical assemblies. Server manufacturers can install each assembly close to the processor, a near-memory design that provides higher performance than a far-memory approach. HMCs offer higher bandwidth, less energy use and a smaller physical footprint than double data rate 3 memory devices.
HMCs compete with another emerging server memory type, High Bandwidth Memory (HBM), a high-performance interface that aims to bring memory devices closer to the CPU or GPU via an interposer. Like HMCs, HBM modules offer higher bandwidth and operate at lower clock frequency and power than conventional memory technologies.
Vendors race to create the best flash alternative
While an NVDIMM built on persistent DRAM offers fast data access and increased system performance, it may barely compare to 3D XPoint -- a collaboration between Intel and Micron that acts as both an SSD and a new class of NVDIMM. 3D XPoint's SSD model integrates easily with existing servers and storage, but its NVDIMM model doesn't yet have compiler, link loader and OS support. The NVDIMM version of 3D XPoint runs at about 20% of the speed of standard volatile DRAM, and your software will need to adapt to the technology. To take advantage of the 3D XPoint NVDIMM, you need a persistent memory class in the compiler and guaranteed atomicity while writing data.
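Why write atomicity matters can be sketched briefly: a crash partway through an update must not leave a torn, half-written record. A common persistent memory pattern is to make the payload durable first and only then flip a small commit flag, relying on the hardware guarantee that a single aligned store is atomic. The sketch below is illustrative only, using a temporary file as a stand-in for a persistent memory region, with invented offsets and a hypothetical `commit` helper.

```python
import mmap
import os
import tempfile

# Stand-in file for a persistent memory region; offsets are illustrative.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)
pmem = mmap.mmap(fd, 4096)

FLAG, PAYLOAD = 0, 64    # byte 0 = commit flag, payload lives at offset 64

def commit(record: bytes) -> None:
    """Publish a record so a crash never exposes a torn write: the payload
    becomes durable first; only then is the one-byte commit flag set."""
    pmem[FLAG] = 0                            # invalidate any old record
    pmem.flush()
    pmem[PAYLOAD:PAYLOAD + len(record)] = record
    pmem.flush()                              # payload is durable...
    pmem[FLAG] = 1                            # ...before the atomic commit
    pmem.flush()

commit(b"balance=100")
valid = pmem[FLAG] == 1
stored = bytes(pmem[PAYLOAD:PAYLOAD + 11])

pmem.close()
os.close(fd)
os.remove(path)
```

A reader that finds the flag cleared simply ignores the payload, so every observable state is either the old record or the complete new one.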
However, the promising path of 3D XPoint hit a snag: The controller in Intel's version of the memory, called Optane, didn't meet its performance goals, delivering only about a 4x speed improvement rather than the 10x boost seen in Micron's SSDs. Meanwhile, Samsung and Western Digital's SanDisk are working on flash replacement products that will compete directly with 3D XPoint.