What you need to know about memory channel storage

MCS is changing the way memory and storage mix. Integrate this storage technology into servers to accelerate big data and other I/O-intensive applications.

A new technology is blurring the traditional line between server memory and storage -- one that fits easily into most existing servers.

Memory channel storage (MCS) moves flash memory devices closer to the CPU, which benefits any enterprise application that is sensitive to storage I/O performance. Non-volatile flash memory offers the best combination of speed, reliability and data retention.

Without MCS, solid-state flash drives are limited by the relatively slow Serial ATA (SATA) or Serial-Attached SCSI (SAS) drive interfaces, or by the server's PCI Express bus. These devices' firmware and drivers cause the operating system to regard them as storage, even though the underlying medium is memory.

With MCS, flash memory devices attach directly to the DDR3 bus, a high-performance, low-latency interface for moving huge quantities of data between CPUs and memory. This puts storage elements into the dual in-line memory module (DIMM) sockets. MCS devices include Diablo Technologies' Memory Channel Storage product, SanDisk's ULLtraDIMMs, and the similar IBM eXFlash DDR3 Storage DIMMs.

Memory controllers use multiple channels, each of which can be populated with modules. This boosts performance by allowing memory interleaving -- spreading access tasks over multiple channels at the same time. Placing MCS devices in DIMM slots across multiple memory channels allows the same interleaving for distributed storage. Data transfers between the flash memory modules (storage) and system memory modules (RAM) pass directly over the DDR3 memory bus.
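To make the interleaving idea concrete, here is a minimal Python sketch -- illustrative only, not vendor firmware -- of how consecutive accesses can be striped across channels. The channel count and line size are assumptions for illustration:

# Illustrative sketch (not vendor code): interleaving maps
# consecutive cache-line-sized blocks across memory channels.
CHANNELS = 4          # hypothetical channel count
LINE_SIZE = 64        # bytes per cache line

def channel_for(address: int) -> int:
    """Round-robin: consecutive 64-byte lines land on successive channels."""
    return (address // LINE_SIZE) % CHANNELS

# Four back-to-back accesses hit four different channels,
# so transfers proceed in parallel instead of queuing on one bus.
for addr in range(0, 4 * LINE_SIZE, LINE_SIZE):
    print(f"address {addr:#06x} -> channel {channel_for(addr)}")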

Memory channel storage capacity can be substantial. IBM eXFlash devices for the X6 server family are currently available in 200 GB and 400 GB models, and some X6 servers can support more than 12 TB of installed flash storage using MCS.
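As a back-of-the-envelope check of that figure, a hypothetical configuration of 32 DIMM slots filled with the larger modules works out as follows; the slot count is an assumption for illustration, and actual X6 configurations vary by model:

# Back-of-the-envelope check of the ">12 TB" figure.
slots = 32                          # assumed slot count, for illustration
module_gb = 400                     # largest eXFlash model cited above
total_tb = slots * module_gb / 1000
print(f"{slots} x {module_gb} GB = {total_tb} TB of MCS flash")  # 12.8 TB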

Benefits and limitations of memory channel storage

MCS zeros in on storage I/O performance, improving enterprise analytics (big data) applications, busy transactional databases and almost any sort of virtualized environment, such as server virtualization and virtual desktop infrastructure. This is because the DDR3 bus enables terabytes of non-volatile storage to communicate directly with CPUs and memory without I/O controller overhead.

But flash memory, unlike ordinary disk storage, endures only a limited number of write/erase cycles, so memory channel storage has a finite working life, expressed as total bytes written or drive writes per day (DWPD). However, DWPD isn't always a practical limitation. For example, IBM's eXFlash DIMMs are rated at 10 DWPD over a five-year lifecycle, but a 400 GB flash storage device will rarely see 4,000 GB of new writes daily.
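The endurance arithmetic is straightforward. Using the figures above, a short calculation shows the daily and lifetime write budgets the rating implies:

# Converting the cited endurance rating into concrete write budgets.
capacity_gb = 400        # eXFlash module capacity
dwpd = 10                # rated drive writes per day
years = 5                # rated lifecycle

daily_budget_gb = capacity_gb * dwpd                  # 4,000 GB per day
lifetime_tbw = daily_budget_gb * 365 * years / 1000   # total TB written
print(f"daily write budget: {daily_budget_gb} GB")
print(f"lifetime endurance: {lifetime_tbw:,.0f} TB written")  # ~7,300 TB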

To extend their lives, flash devices use wear-leveling algorithms that spread new writes across the entire device rather than simply rewriting the same blocks in the same place. Each independent storage module handles its own wear leveling.
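The following toy sketch illustrates the idea behind wear leveling -- it is not any vendor's actual algorithm. Each write is steered to the block with the fewest erases, so wear accumulates evenly instead of burning out frequently rewritten blocks:

# Toy wear-leveling sketch -- not any vendor's algorithm.
erase_counts = [0] * 8     # hypothetical 8-block device

def pick_block() -> int:
    """Steer the next write to the least-worn block."""
    return min(range(len(erase_counts)), key=lambda b: erase_counts[b])

for _ in range(24):        # 24 writes spread evenly across 8 blocks
    erase_counts[pick_block()] += 1

print(erase_counts)        # [3, 3, 3, 3, 3, 3, 3, 3]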

Put memory channel storage to work

Data center operations can add memory channel devices to many existing servers with minor or no firmware updates. Servers just have to meet some underlying requirements and device population rules.

The server sees an MCS module as any other block storage device, but proper recognition and setup of the modules may require a Unified Extensible Firmware Interface (UEFI) or Basic Input/Output System (BIOS) update. Some servers, such as IBM's X6 family, are specifically designed for MCS, making setup uncomplicated. A specialized kernel driver allows the operating system to use MCS devices without altering the OS or applications. Test MCS devices on target servers to confirm compatibility and apply any necessary firmware updates before rolling out to production.
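On a Linux server, for instance, one quick way to confirm that MCS modules have surfaced as block devices is to enumerate /sys/block. This sketch assumes a standard sysfs layout; actual device names vary by driver:

# List block devices and sizes as seen by the Linux kernel. An MCS
# module exposed by its driver appears here like any other disk.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    size_file = dev / "size"        # size is reported in 512-byte sectors
    if size_file.exists():
        sectors = int(size_file.read_text())
        print(f"{dev.name}: {sectors * 512 / 1e9:.1f} GB")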

Always verify OS support when selecting MCS devices, and check for limitations on processor selection, memory types, network adapters and other issues. For example, IBM's x3650 M4 server supports eXFlash DIMMs, but only in a narrow configuration: four or eight eXFlash devices, Red Hat Enterprise Linux 6 Server x64 (Update 4), four Intel Xeon E5 models (2643, 2667, 2690 and 2697), 16 GB PC3-14900 1866 MHz LP RDIMMs exclusively, and an Intel X520 dual-port 10 GigE network adapter.

There are also rules about the number of MCS devices allowed per memory channel, along with the allowable types and capacities of associated memory. For example, only one MCS module per memory channel is allowed, and all MCS modules must have equal storage capacity. Each channel must also include at least one registered DIMM; other DIMM types (such as unregistered DIMMs or UDIMMs) are not supported with MCS modules. Before adopting MCS, tally the memory DIMMs in the server. If your server uses UDIMMs, or if all available DIMM slots are occupied, it might not be possible to use MCS.
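These population rules lend themselves to a simple pre-deployment audit. The sketch below encodes the rules just described; the inventory data structures are assumptions for illustration, not any vendor's tooling:

# Sketch of a pre-deployment check for the MCS population rules above.
def validate_channel(modules: list[dict]) -> list[str]:
    errors = []
    mcs = [m for m in modules if m["type"] == "MCS"]
    if len(mcs) > 1:
        errors.append("only one MCS module allowed per channel")
    if mcs and not any(m["type"] == "RDIMM" for m in modules):
        errors.append("channel with an MCS module needs at least one RDIMM")
    if mcs and any(m["type"] == "UDIMM" for m in modules):
        errors.append("UDIMMs are not supported alongside MCS modules")
    return errors

def validate_server(channels: list[list[dict]]) -> list[str]:
    errors = []
    caps = {m["capacity_gb"] for ch in channels for m in ch if m["type"] == "MCS"}
    if len(caps) > 1:
        errors.append("all MCS modules must have equal capacity")
    for i, ch in enumerate(channels):
        errors += [f"channel {i}: {e}" for e in validate_channel(ch)]
    return errors

# Example: channel 0 is valid; channel 1 violates the UDIMM rule.
print(validate_server([
    [{"type": "MCS", "capacity_gb": 400}, {"type": "RDIMM", "capacity_gb": 16}],
    [{"type": "MCS", "capacity_gb": 400}, {"type": "UDIMM", "capacity_gb": 8}],
]))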

Existing MCS modules don't support memory features such as lockstep, memory sparing and memory mirroring. For added resiliency, however, two MCS modules in the system can be paired for mirroring (RAID 1).

Unlike many enterprise-class memory modules, current-generation flash storage DIMMs are not hot-swappable. This means the server must be powered down to replace a defective MCS device. While RAID 1 support helps protect flash storage contents, and virtualization allows the server's workloads to migrate without downtime, there still may be an impact on high-availability systems.
