Flash-based storage devices can boost the performance of storage-intensive applications at the server, but there are challenges that cannot be overlooked.
Although traditional SATA, SAS and Fibre Channel hard drives boast reliable and proven storage architectures, they are at least an order of magnitude slower than system memory, and reading and writing to the storage array is multi-step and latency-ridden. Standalone solid state drives (SSDs) replace rotating media with flash memory, but data still moves to and from the storage subsystem across an Ethernet or Fibre Channel SAN.
Flash-based in-server storage devices overcome storage bottlenecks by concentrating storage close to the CPUs without the traditional performance barriers. But these devices may also fall outside of your traditional storage management and data protection tools.
Storage latency problems
Conventional rotating magnetic disk storage creates bottlenecks at the device and system level.
A high-performance disk drive imposes about 2 milliseconds of average latency to reach the track and sector containing data. When data is fragmented across the disk, a single file can incur several of these delays.
The disk's interface is another possible bottleneck. A modern 15,000 rpm Seagate Cheetah 15K.7 SAS disk drive is rated for a top data rate of about 600 MB per second. By comparison, the 64-bit DDR3 DRAM modules in a Dell PowerEdge 720 server operate at 1,333 megatransfers per second and can move more than 5 GB per second between the CPU and memory.
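Some back-of-the-envelope arithmetic makes that gap concrete. The figures below are the rated numbers cited above, not measurements; the DDR3 result is a theoretical per-channel peak, so the article's "more than 5 GB per second" sustained figure is consistent with it.

```python
# Rated SAS interface rate for the Seagate Cheetah 15K.7, in MB/s.
SAS_DISK_MBPS = 600

# DDR3-1333: 1,333 megatransfers/s across a 64-bit (8-byte) bus.
DDR3_TRANSFERS_PER_S = 1_333_000_000
DDR3_BYTES_PER_TRANSFER = 8
ddr3_mbps = DDR3_TRANSFERS_PER_S * DDR3_BYTES_PER_TRANSFER / 1_000_000

print(f"DDR3-1333 theoretical peak: {ddr3_mbps:,.0f} MB/s")   # ~10,664 MB/s
print(f"Ratio vs. SAS disk interface: "
      f"{ddr3_mbps / SAS_DISK_MBPS:.0f}x")                    # roughly 18x
```

Even against the disk's interface rate alone, and before any seek latency is counted, memory is more than an order of magnitude faster.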
System-level latency comes from the distance storage data must travel within the enterprise. For a given application, the server CPU moves data to a network interface card for Ethernet SAN storage, or to a Fibre Channel host bus adapter for Fibre Channel SAN storage.
A typical Ethernet local area network moves data at 1 Gbps, and probably supports more traffic than just storage. A Fibre Channel network is dedicated for storage, but most Fibre Channel SAN deployments are limited to 4 Gbps. Both network options move data slower than even a SAS interface, and they pass data through network switches, which introduce more latency on the path to the storage array. At the storage array, data passes through a RAID controller then disperses across several disks within the RAID group.
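Converting those nominal link rates into MB per second shows why both network paths trail even the SAS interface. This is a simple unit conversion of the raw line rates; usable payload bandwidth is lower still, since both 1 Gigabit Ethernet and 4 Gbps Fibre Channel carry 8b/10b encoding overhead.

```python
def gbps_to_mb_per_s(gbps):
    """Convert a raw line rate in gigabits/s to megabytes/s (decimal units)."""
    return gbps * 1000 / 8

for name, gbps in [("1 Gbps Ethernet", 1), ("4 Gbps Fibre Channel", 4)]:
    print(f"{name}: {gbps_to_mb_per_s(gbps):.0f} MB/s raw")

# Compare with the 600 MB/s SAS interface rate cited earlier -- and neither
# raw figure yet accounts for encoding overhead or switch hops.
```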
When an application needs data from the storage array, these steps -- and delays -- occur basically in reverse.
Flash memory in the server
SSD storage arrays bypass these latency traps, but they still rely on standard drive interface ports -- such as SATA or SAS -- with disk controllers and protocols developed for conventional magnetic media.
Unconventional flash-based storage within the server eliminates SATA/SAS protocols and lets the server exchange application data with storage using direct memory access (DMA) techniques. Local flash devices suit mission-critical, storage-centric enterprise applications such as real-time transactional and data analytics workloads.
The Fusion-io ioDrive is one flash-based storage option. It fits into a standard PCIe expansion slot and appears to the host server as a new nonvolatile memory tier rather than a conventional storage device. The 600 GB ioDrive imposes read/write latency of less than 50 microseconds and delivers more than 1.3 GB per second of bandwidth directly through the server's PCIe slot.
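A quick calculation shows what a 50-microsecond latency means next to the roughly 2-millisecond average latency of a 15,000 rpm disk. This is an idealized, strictly serialized comparison using the figures cited in this article; real workloads issue requests in parallel, but the per-request gap is the same.

```python
DISK_LATENCY_S = 2e-3      # ~2 ms average latency, high-performance disk
FLASH_LATENCY_S = 50e-6    # <50 us read/write latency, ioDrive

# Operations per second if each request waits for the previous one.
disk_serial_iops = 1 / DISK_LATENCY_S
flash_serial_iops = 1 / FLASH_LATENCY_S

print(f"Disk, strictly serial: {disk_serial_iops:.0f} ops/s")    # ~500
print(f"Flash, strictly serial: {flash_serial_iops:.0f} ops/s")  # ~20,000
print(f"Per-request speedup: "
      f"{flash_serial_iops / disk_serial_iops:.0f}x")            # ~40x
```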
The application and operating system still see the flash memory as just another conventional block-based storage device. Flash-based local storage won't change the server's main memory; applications and virtualized workloads still load and run in memory. But storage activity redirects to the local flash-based storage card instead of routing to the external SAN, so application storage traffic interacts directly with the server's processors.
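Because the device presents as ordinary block storage, application code does not change: the same file-open and read calls work whether the file system behind a path sits on a SAN LUN or a local flash card. The sketch below, with hypothetical mount paths, times small reads through the standard POSIX file API; it measures the software path (page-cache effects included), not raw device latency, which real benchmarks isolate with O_DIRECT and aligned buffers.

```python
import os
import time

def average_read_latency(path, block_size=4096, reads=100):
    """Average the time of small sequential reads via the POSIX file API.

    The code path is identical whether `path` is backed by a SAN LUN or a
    local flash device -- the storage tier is invisible to the caller.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(reads):
            if not os.read(fd, block_size):
                os.lseek(fd, 0, os.SEEK_SET)  # wrap around at end of file
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / reads

# Hypothetical example: compare a SAN-backed mount with a local flash mount.
# print(average_read_latency("/mnt/san/testfile"))
# print(average_read_latency("/mnt/iodrive/testfile"))
```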
Caveats to flash-based storage
Although cost is always an important issue, the biggest caveats with flash storage technologies are logistical. Installing an ioDrive or similar device decentralizes storage, moving it off the SAN. If your organization has invested in centralized storage management, putting drives into individual servers is counterintuitive.
Consider system compatibility with the unconventional storage. The ioDrive is compatible with most current servers that have an available PCIe 2.0 x4 slot and with major operating systems (OSes), but other devices may not be. Ensure that the target server runs a supported OS now, and test future OS versions thoroughly for compatibility with the storage device prior to any upgrades.
The flash memory storage devices must be managed and provisioned to various applications, and they may not integrate with existing management infrastructures or remote management tools. That means servicing each server individually wherever you deploy flash-based storage, even in remote data centers.
Flash storage devices also affect system backup and disaster recovery (DR) planning: they fall outside the centralized SAN yet support critical, high-performance applications. Deploying multiple devices in a RAID configuration such as RAID 1 restores some redundancy for DR, but adds deployment costs and system hardware requirements.
Careful lab testing and proof-of-principle projects should guide your local flash-based storage implementation, identifying potential problems before putting the storage units to work in production servers. And make sure you understand how flash storage works before investing in the technology.
Stephen J. Bigelow asks:
Have you considered flash memory for your servers? Why or why not?