
Data center storage architecture moves toward software-defined memory

New IT developments are converging storage and memory into a hybrid approach, bringing the idea of software-defined memory closer to reality.


Software-defined storage is only starting to gel, but the pace of systems evolution is such that the next innovations are already coming into focus.

We are not talking about pools of disks here, or even solid-state drives. The future of storage lies in its convergence with memory. System memory is becoming more complex with the introduction of nonvolatile dual in-line memory modules (NVDIMMs), which combine the speed of memory with the persistence traditionally provided by data center storage.

These products are already available. Micron has the first type of all-flash NVDIMM in production, and several vendors are offering servers with this hardware. The advantage, of course, is that data moves on the memory bus at much higher speeds than on Peripheral Component Interconnect Express (PCIe), though the NVDIMM flash is still quite a bit slower than dynamic RAM (DRAM).

There are cases, such as in military systems or financial services, where both DRAM speed and persistence are required. Viking Technology created a version of NVDIMM that pairs a large DRAM space with matching flash. When a system powers up, the user has the option to load data from the flash into the corresponding DRAM; if power is lost or the machine stops, the DRAM is backed up to the flash.

The power of the Viking approach is that the system can write data to the DRAM using CPU register-memory commands. This allows single-byte writes, instead of the 4 KB blocks used in traditional storage I/O and in the all-flash type of NVDIMM. This byte-mode I/O is thousands of times faster than block access to flash. The software to support this capability is complex, involving not only OS changes to handle exceptions but compiler extensions as well. And because this type of I/O doesn't use the standard block method, applications need to be modified.
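To make the contrast concrete, here is a minimal C sketch of a byte-granular persistent write. It assumes a persistent region is already mapped into the application's address space; the cache-flush intrinsic stands in for whatever commit mechanism a given platform actually provides:

```c
#include <stdint.h>
#include <emmintrin.h>  /* _mm_clflush, _mm_mfence (SSE2 intrinsics) */

/* Byte-granular update to a persistent (NVDIMM-backed) region.
 * Assumes pmem already points into memory-mapped persistent space.
 * The field is updated with an ordinary CPU store, then the cache
 * line is flushed so the write reaches the persistence domain --
 * no 4 KB block I/O is involved. */
void update_balance(uint64_t *pmem, uint64_t new_balance)
{
    *pmem = new_balance;   /* plain store: byte/word granularity */
    _mm_clflush(pmem);     /* push the dirty cache line toward the DIMM */
    _mm_mfence();          /* order the flush before subsequent stores */
}
```

A store plus a flush completes in nanoseconds, which is the speed gap the byte-mode approach exploits.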

We can expect to see real-world applications of the hybrid approach in late 2017 as the software changes emerge. Database systems will most likely be the first use cases, with the database vendor implementing all the changes so that the platform is transparent to the end user.

Alternatives to flash -- such as Intel and Micron's 3D XPoint, Hewlett Packard Enterprise and SanDisk's Memristor, and Sony and Viking Technology's ReRAM -- will improve speed ratios between the persistent and non-persistent memory segments. Even so, these will remain significantly slower than today's DRAM.

The key to acceptance of these technologies is that the application must see them either as a DRAM-like byte-addressable space or as a block I/O drive. Without this, the extra speed is lost in application software overhead. The fact that these are multi-company efforts points to the challenges and complexities involved in bringing these bleeding-edge technologies to market, so don't expect products until late 2017 or early 2018.

Solid-state drives (SSDs) are getting much faster and more compact, with 10 million IOPS drives already shipping and 100 TB capacities on the horizon. Clearly, we are facing an explosion of storage choices, which will bring a tremendous boost in performance. This also creates confusion in the data center storage architecture space.

NVDIMM and SSD complement each other. NVDIMMs will hold terabytes of flash -- once 3D NAND die are integrated later in 2017 -- and use an OS driver to make the flash space look like a drive. The largest NVDIMMs on the near horizon, however, will likely top out at 5 TB, just a bit bigger than the smallest SSDs.

Redefining memory

This is where software-defined memory (SDM) comes into its own. If memory is persistent, why not use it to complement other persistent storage? The theory of SDM is to treat all memory and storage as a service and rely on software tools to move data across physical devices. As an example, a memory object might be created in DRAM, but a part of it that's rarely accessed might move down the memory stack to persistent dual in-line memory module (DIMM) and then out to an SSD.
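As a rough illustration of the idea, the toy policy below shows the kind of placement decision an SDM layer would make automatically as access patterns change. The tier names and thresholds are invented for this sketch; a real SDM layer would track per-object access statistics and migrate data transparently:

```c
#include <stdio.h>

/* Toy tiering policy in the spirit of SDM: hot data stays in DRAM,
 * warm data lives on the persistent DIMM, cold data moves to SSD. */
enum tier { TIER_DRAM, TIER_NVDIMM, TIER_SSD };

enum tier place_object(unsigned accesses_per_sec)
{
    if (accesses_per_sec > 10000) return TIER_DRAM;   /* hot: volatile DRAM */
    if (accesses_per_sec > 100)   return TIER_NVDIMM; /* warm: persistent DIMM */
    return TIER_SSD;                                  /* cold: out to the SSD pool */
}

int main(void)
{
    static const char *names[] = { "DRAM", "NVDIMM", "SSD" };
    printf("rarely touched object -> %s\n", names[place_object(3)]);
    return 0;
}
```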

SDM is still in its infancy, so the software being developed is tied to hardware from vendors such as Intel or Diablo Technologies. This will likely change, since most of the code -- except for drivers -- should be capable of running on a mix of hardware.

Applying data services, such as compression, is a big step up in sophistication. Compression not only increases the effective space of the persistent NVDIMM by as much as five times, it reduces the time to transfer data to memory by a similar factor. Other data services -- such as replication or erasure coding, encryption and indexing -- are possible even on data stored on NVDIMMs.
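The arithmetic is simple, as this back-of-the-envelope sketch shows. The capacity, ratio and bandwidth figures are illustrative assumptions, not vendor specifications:

```c
#include <stdio.h>

/* Effect of a 5:1 compression data service on a persistent NVDIMM.
 * All figures below are assumed for illustration. */
int main(void)
{
    double raw_tb   = 1.0;   /* raw NVDIMM capacity, TB (assumed) */
    double ratio    = 5.0;   /* compression ratio (assumed)       */
    double gb_per_s = 10.0;  /* transfer bandwidth, GB/s (assumed) */

    printf("effective capacity: %.1f TB\n", raw_tb * ratio);
    /* Moving 100 GB of logical data only transfers 100/ratio GB: */
    printf("time to move 100 GB logical: %.1f s (vs %.1f s uncompressed)\n",
           100.0 / ratio / gb_per_s, 100.0 / gb_per_s);
    return 0;
}
```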

This extension of the software-defined storage model to all memory structures could spur some interesting architectural advances. The Hybrid Memory Cube (HMC), a concept promoted by an eight-vendor consortium, aims to replace the traditional parallel bus to the DIMM with a low-power serial bus system with many parallel channels. These channels are like the individual PCIe links in today's servers.

HMC processors will have an extra layer of level 4 cache (perhaps up to 32 GB) made up of very fast DRAM, which will boost performance dramatically. The CPU will also have large off-module DRAM spaces, which will include nonvolatile memory. With this, the effective system memory space becomes huge -- even before compression. The concept is moving toward reality in the second half of 2017.

Another innovation, likely to debut in 2018, is the move to make memory and SSD storage pools equal peers in the cluster. This allows data to move directly from persistent system memory to SSDs or to other cluster nodes, reducing internal bus loads by large factors. This flow, as in software-defined storage, can be controlled by data services running in virtual machines or containers.

The key to SDM lies in the data service tools that vendors are developing. These vary by the type of nonvolatile DIMM.

Hardware vendors drive innovation

In addition to OS updates, new driver approaches are also appearing. These improve the performance of both byte-addressable and block-mode operation. Western Digital's SanDisk unit, for example, has a nonvolatile memory file system to accelerate legacy block I/O apps, as well as auto-commit memory that supports byte-addressable nonvolatile spaces. These should reach down into the nonvolatile memory-attached SSD pool in a server, too, effectively making the in-server memory a single entity with auto-tiering between layers.
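The general shape of byte-addressable access through a file system looks something like the following sketch, which assumes a hypothetical file on a persistent-memory-aware (DAX-style) mount; actual vendor drivers and commit semantics differ:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Expose persistent memory to an app as ordinary memory via a
 * memory-mapped file. The path is hypothetical; it assumes a file
 * system that maps NVDIMM space directly into the address space. */
int main(void)
{
    int fd = open("/mnt/pmem/log", O_RDWR);  /* hypothetical pmem-backed file */
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "byte-addressable update");  /* plain store, no block I/O path */
    msync(p, 4096, MS_SYNC);               /* conservatively force persistence */

    munmap(p, 4096);
    close(fd);
    return 0;
}
```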

Intel, preparing for Optane -- its version of 3D XPoint -- plans initially to release SSDs with a block I/O interface. Once NVDIMMs become available, the roadmap becomes much richer, with software-defined memory for fabrics acting as an inter-server clustering connection and SDM for storage-class memory aimed at clustering the NVDIMMs. These will include byte addressability.

With these software tools, DRAM, Optane NVDIMMs and NVMe SSD storage will be seen across the system as one coherent, shared memory pool. Intel claims that no OS or application changes will be needed to use these tools, but a huge memory pool with varying latencies places a burden on optimization. Intel says that new predictive caching software from ScaleMP obviates the latency issues.

NVDIMM-N, the hybrid type of nonvolatile memory, can be used as DRAM with very fast reads and writes. Using CPU register-memory instructions to execute data transfers requires a good deal of change across the software ecosystem: today's OSes are not capable of recognizing when a section of memory is persistent. Vendors across the entire stack, however, are developing OS updates, new drivers and compatible applications.

In the meantime, as with the all-flash NVDIMM, a vendor driver is used to present the DRAM as a super-fast storage drive using 4 KB block I/O. Out of the box, usable capacity is limited to equal amounts of DRAM and backup flash, so micro-tiering drivers from companies such as Enmotus or Plexistor are needed to use the flash memory to its full potential.

NVDIMM-F, the all-flash option, uses a block I/O access mechanism and a vendor driver to present the memory to the OS as a drive. No application changes are needed, though the product's high speed points to a nonvolatile memory express (NVMe) type of approach: using circular queues instead of the SCSI I/O stack reduces driver overhead and speeds operations considerably.
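A toy model of such a circular submission queue suggests why it is cheap: submitting a command is just a copy and a tail-pointer update, with none of the SCSI stack's layered command translation. The names and sizes here are invented for illustration and are not the real NVMe structures:

```c
#include <stdint.h>

/* Simplified NVMe-style circular submission queue. Real NVMe queues
 * use 64-byte commands and doorbell registers; this sketch only shows
 * the lock-free producer/consumer shape of the ring. */
#define QDEPTH 64

struct cmd { uint64_t lba; uint32_t blocks; uint8_t opcode; };

struct sq {
    struct cmd slots[QDEPTH];
    uint32_t head;   /* consumed by the device */
    uint32_t tail;   /* produced by the driver */
};

/* Returns 0 on success, -1 if the queue is full. */
int sq_submit(struct sq *q, const struct cmd *c)
{
    uint32_t next = (q->tail + 1) % QDEPTH;
    if (next == q->head)
        return -1;              /* full: device hasn't caught up */
    q->slots[q->tail] = *c;     /* copy command into the ring */
    q->tail = next;             /* "ring the doorbell" (advance tail) */
    return 0;
}
```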

Emerging technologies, such as 3D XPoint, change the access game considerably. These are designed as NOR devices -- as opposed to the standard NAND scheme -- with gating logic to select active cells for reading and writing, making them inherently byte-addressable. They will likely use a driver initially to handle block I/O and present as drives. Rapid evolution of drivers and OS features, as with NVDIMM-N, will bring direct byte access via CPU commands.

Even though these are much faster than NAND flash, they do not achieve DRAM speed, which creates a dilemma for caching. The answer might involve the HMC module architecture, which puts a large space of very fast DRAM between the DIMMs and the CPU.

SDM is an emerging technology in data center storage architecture, with both hardware platforms and SDM software approaches rapidly developing. This year will be one of frenetic change, and we can expect to see some stumbles, but mainstream adoption is not far away.
