
Software-defined memory spurs mix of IT challenges, opportunities

The expanding role of NVDIMM and DRAM and the emergence of software-defined memory will bring more flexibility to enterprise data centers -- but not without some challenges.

Just as the IT industry began to accept the idea of software-defined infrastructure, along came another new software-defined technology: memory.

At a basic level, software-defined memory takes dynamic RAM (DRAM) and adds new memory tiers. These new tiers -- such as Intel's Optane -- and the DRAM space are supported by software, which enables users to flexibly partition the total memory, present a set of services to the host system and make those services available across the cluster network to other servers. This model creates new opportunities for IT teams, as well as some new challenges, as the server architectural choices expand.

In addition, software-defined memory might change the way admins plan installs and tune workflows, as well as influence how programmers rewrite legacy apps to take advantage of new features.

The role of NVDIMM in software-defined memory

Non-volatile dual in-line memory modules (NVDIMMs) are flash or Optane memory modules mounted on DIMM form factor cards that use the memory bus for access. On the surface, this new memory is just a fast solid-state drive (SSD), but the NVDIMM software ecosystem has expanded to treat the modules as a DRAM extension via a caching model. Now, software is going even further, with ways to share the NVDIMM space across servers to build a "super-cache."
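To make the memory-bus access model concrete, here is a minimal sketch in C of mapping a persistent region straight into an application's address space with mmap. It assumes a Linux system where the OS exposes the NVDIMM through a DAX-mounted filesystem; the path is hypothetical, and the file is assumed to be pre-created:

```c
/* Minimal sketch: load/store access to an NVDIMM-backed file.
 * The path is hypothetical; on Linux it would live on a
 * DAX-mounted filesystem over a pmem namespace. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    int fd = open("/mnt/pmem/cachefile", O_RDWR);  /* hypothetical, pre-created */
    if (fd < 0) { perror("open"); return 1; }

    /* MAP_SHARED gives direct load/store access to the module --
     * no block layer, no page-cache copy in between. */
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(pmem, "persists across power-down once flushed");
    msync(pmem, len, MS_SYNC);   /* force the update out to the media */

    munmap(pmem, len);
    close(fd);
    return 0;
}
```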

The driving force behind the super-cache is remote direct memory access (RDMA), the very low-latency memory-to-memory transfer mechanism found in Ethernet and InfiniBand. RDMA has surfaced in the NVDIMM space through a variety of startup SSD products based on NVMe over Fabrics (NVMe-oF). RDMA completes transfers in microseconds and consumes few CPU resources in the process.
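As a rough illustration of the mechanism, the sketch below uses the libibverbs API to register a local buffer so peers can read and write it directly over the fabric -- the building block behind a shared super-cache. Error handling is trimmed, the first RDMA device is assumed, and in a real deployment the buffer would be the NVDIMM mapping itself:

```c
/* Sketch: register a memory region for remote RDMA access.
 * Compile with -libverbs; assumes at least one RDMA-capable NIC. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    size_t len = 1u << 20;
    void *buf = malloc(len);          /* in practice: the NVDIMM mapping */

    /* Pin and register the region; the returned keys are what remote
     * peers use to address it directly over the fabric. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("rkey for remote peers: 0x%x\n", mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```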

Hyper-converged systems can take advantage of this huge boost in performance to build more powerful clusters, while database applications spread across multiple servers can now share all the data as a pool. As database applications begin to use terabytes of NVDIMM in each server, the performance and latency profiles will improve.

Still, NVDIMM technology is relatively new, and this is especially true of the software. Since the code is still evolving, expect more features in the future. For example, current NVMe-oF uses a block I/O approach. This replaces the old SCSI stack and is much more efficient -- but it still operates by transferring 4 KB blocks of data. It also uses a read-modify-write approach that adds the delay of an extra I/O: to change just one byte, the 4 KB block must be read, modified and written back, so more than 8 KB of network transfers are needed for a single-byte update.
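The arithmetic is easy to model. In this toy sketch, blk_read and blk_write are hypothetical stand-ins for the 4 KB fabric transfers, and a counter shows that a one-byte change moves 8,192 bytes:

```c
/* Toy model of the block I/O path: the fabric moves whole 4 KB
 * blocks, so a 1-byte update costs a read-modify-write. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK 4096u

static uint8_t backing[1u << 20];   /* stand-in for the remote target */
static size_t  bytes_on_wire;      /* counts simulated transfers     */

static void blk_read(uint64_t lba, void *buf)
{
    memcpy(buf, backing + lba * BLOCK, BLOCK);
    bytes_on_wire += BLOCK;
}

static void blk_write(uint64_t lba, const void *buf)
{
    memcpy(backing + lba * BLOCK, buf, BLOCK);
    bytes_on_wire += BLOCK;
}

/* Change one byte at a byte offset via the block interface. */
static void update_byte(uint64_t off, uint8_t value)
{
    uint8_t buf[BLOCK];
    blk_read(off / BLOCK, buf);    /* 4 KB in                 */
    buf[off % BLOCK] = value;      /* the actual 1-byte change */
    blk_write(off / BLOCK, buf);   /* 4 KB back out           */
}

int main(void)
{
    update_byte(12345, 0xFF);
    printf("%zu bytes transferred to change 1 byte\n", bytes_on_wire);
    return 0;
}
```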

Some NVDIMMs -- and most likely Optane in the future -- can write a single byte of data. This opens up the option of using single register-to-memory store instructions instead of the code path that moves 8 KB across the network, which is blindingly fast in comparison to the block I/O approach. Other companies will support variable block sizes -- either as transfers to a fixed-block architecture or as a mechanism to store data in a key-value storage model.
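A minimal sketch of what that looks like on x86-64, assuming the NVDIMM is already mapped into the address space: one store instruction, one cache-line flush and one fence replace the entire block I/O round trip. The buffer here is an ordinary DRAM stand-in for illustration only:

```c
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence (SSE2) */

/* One store, one cache-line flush, one fence: the whole "I/O". */
static void persist_byte(char *p, char value)
{
    *p = value;        /* single store instruction              */
    _mm_clflush(p);    /* push the cache line toward the media  */
    _mm_sfence();      /* order the flush before continuing     */
}

int main(void)
{
    static char buf[64];   /* stand-in for a mapped NVDIMM byte */
    persist_byte(&buf[0], 1);
    return 0;
}
```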

Many of these options will coexist with other software-defined technologies, including storage and networking, as well as orchestration. That is where we see the rise of software-defined memory. In the not-too-distant future, these memory buses will contain a complex meld of access protocols and meet a variety of use cases.

Predictions and recommendations for software-defined memory

In 2018, for example, some servers will likely have 500 GB of DRAM and 10 TB of NVDIMM space that is all mapped onto memory addresses. An OS driver will identify the NVDIMM space and carve it out of the general memory pool. This prevents the OS from stepping on the persistent data and also allows the data to persist across power-downs.
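One way an application might claim such a carved-out region is through the PMDK libpmem library. This is a sketch only -- the mount path is hypothetical, and the library falls back to msync when the mapping turns out not to be true persistent memory:

```c
/* Sketch: claim a file on a DAX-mounted NVDIMM namespace via libpmem.
 * Compile with -lpmem; the path is hypothetical. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create-or-open a 16 MB persistent region. */
    char *base = pmem_map_file("/mnt/pmem/region", 16u << 20,
                               PMEM_FILE_CREATE, 0600,
                               &mapped_len, &is_pmem);
    if (base == NULL) { perror("pmem_map_file"); return 1; }

    strcpy(base, "survives power-down");
    if (is_pmem)
        pmem_persist(base, mapped_len);   /* CPU cache-flush path */
    else
        pmem_msync(base, mapped_len);     /* fallback: msync      */

    pmem_unmap(base, mapped_len);
    return 0;
}
```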

To capitalize on that NVDIMM space, add software-based tools to the driver. A persistent RAMDISK, or PRAMDISK, is easy to implement and provides a storage space that performs four times faster than a Peripheral Component Interconnect Express (PCIe) SSD. Tune transfers from PRAMDISK to DRAM with single instruction, multiple data (SIMD) instructions rather than a traditional memcpy, which makes a huge difference in performance. You can also share PRAMDISKs via RDMA across the cluster.
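The SIMD point is worth a sketch. The hypothetical stream_copy below uses SSE2 non-temporal stores, which bypass the CPU caches that a plain memcpy would pollute on a bulk PRAMDISK-to-DRAM transfer; it assumes 16-byte-aligned buffers and sizes that are multiples of 16:

```c
#include <emmintrin.h>   /* SSE2: _mm_load_si128, _mm_stream_si128 */
#include <stddef.h>

/* Copy n bytes with non-temporal (streaming) stores instead of a
 * cache-polluting memcpy. Alignment and size assumptions apply. */
static void stream_copy(void *dst, const void *src, size_t n)
{
    __m128i *d = dst;
    const __m128i *s = src;
    for (size_t i = 0; i < n / sizeof(__m128i); i++)
        _mm_stream_si128(&d[i], _mm_load_si128(&s[i]));
    _mm_sfence();   /* make the streaming stores globally visible */
}

int main(void)
{
    /* Stand-ins for a PRAMDISK mapping and a DRAM buffer. */
    static _Alignas(16) char pramdisk[4096], dram[4096];
    stream_copy(dram, pramdisk, sizeof(dram));
    return 0;
}
```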

Use a caching tool to effectively extend memory and turn the 500 GB in the model into the equivalent of about 4 TB. As we move toward field-programmable gate array (FPGA) assists for processors, it will be possible to compress data as it moves to NVDIMM and double the effective space.
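As a rough model of what such a caching tool does -- not any vendor's actual design -- the sketch below fronts a large NVDIMM array with a small direct-mapped DRAM tier; all names and sizes are illustrative:

```c
/* Toy sketch of a DRAM read cache in front of a larger NVDIMM tier:
 * a direct-mapped table of 4 KB lines keyed by NVDIMM offset. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE    4096u
#define N_LINES 1024u                 /* 4 MB of "DRAM" for the demo */

static uint8_t  nvdimm[16u << 20];    /* stand-in for the big tier   */
static uint8_t  dram[N_LINES][LINE];  /* the fast cache tier         */
static uint64_t tag[N_LINES];
static int      valid[N_LINES];

/* Read one byte; hot lines are served from DRAM, misses are filled
 * from the NVDIMM tier. */
static uint8_t cached_read(uint64_t off)
{
    uint64_t line = off / LINE;
    uint32_t slot = line % N_LINES;
    if (!valid[slot] || tag[slot] != line) {   /* miss: fill the slot */
        memcpy(dram[slot], nvdimm + line * LINE, LINE);
        tag[slot] = line;
        valid[slot] = 1;
    }
    return dram[slot][off % LINE];
}

int main(void)
{
    nvdimm[123456] = 42;
    printf("%u\n", cached_read(123456));   /* miss, filled from NVDIMM */
    printf("%u\n", cached_read(123456));   /* hit, served from DRAM    */
    return 0;
}
```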

In 2019, changes to the OS, link loader and compilers will allow byte-addressable memory to coexist with the other forms. This too can be shared across the cluster. The major change here is that apps will need to change -- in some cases, drastically -- to take advantage of the much faster storage paradigm. Expect databases like Oracle to lead the way with "under-the-hood" changes that lead to performance gains.

More DRAM innovations to come

At this point, it's fair to say that some memory is already software-defined, but only the new persistent memories. However, there's plenty of innovation aimed at the use of DRAM. The Hybrid Memory Cube approach -- and proprietary offshoots -- points to CPU/memory complexes where some of the DRAM, and possibly nonvolatile flash and Optane storage, will attach to the processor with low-power, ultra-fast parallel datalinks. First instances will likely be limited to about 32 GB of fast DRAM, though this will grow -- which implies a second, slower DRAM layer with NVDIMM connected on the same bus.

There are multiple uses for this new, faster DRAM layer. It can provide a save area for registers and system memory that allows for fast recovery from a reset event. It can act as a cache for the larger DRAM space and increase overall DRAM performance. Some of the fast DRAM could also be turned into a RAMDISK-like space for volatile system variables.

Other forks in the software-defined memory technology map suggest that memory will eventually become a peer module to CPUs, GPUs, LAN adapters and drives -- all on a fabric internal to each server. We'll see more of this in 2018.

