Storage is a critical resource in the modern data center. Storage bottlenecks can slow key applications and sour the user experience. Storage supports applications, virtual machines, data protection, user data and other vital enterprise tasks, always with the need for more volume.
We spoke with Dennis Martin, storage industry expert and president of Demartek in Arvada, Colo., about his take on storage bottlenecks in the data center.
Where do you see storage bottlenecks occurring in today's data centers? For example, are we having the most problems at network switches or in the storage array itself? Are the applications just too demanding or are there other key pain points?
Martin explains that the problems seem to be rooted in the storage arrays themselves. "The biggest bottlenecks I currently see are with the storage endpoints," he said, citing the inherent performance limitations of electromechanical hard drives as the underlying culprit. "These are electromechanical devices in what would otherwise be an all-electronic environment."
Consider a virtual environment where a tool like vMotion or VMware DRS might move virtual machines (VMs) from one server to another to rebalance the server workload. The problem is that the migration may land a VM on storage whose controller is already overtaxed, leading to VM performance or stability problems that can be difficult to troubleshoot and correct without extensive testing.
As another example, imagine that IT deploys a new application on a volume that shares physical drives with the enterprise email system. If the new application becomes extremely active, the email system may be adversely affected as the drive performance is simply overwhelmed. Finally, the process of a RAID group rebuilding a failed drive can dramatically degrade the performance of the entire RAID group until the new drive's data can be rebuilt from parity data spread across the other disks in the group.
What are the best solutions to those storage bottlenecks? For example, is it a matter of a network upgrade/rearchitecting, new array deployment and more?
Martin is very enthusiastic about the use of solid-state drive (SSD) technology in storage subsystems. "We have been testing SSD technology for quite a while now and are seeing tremendous improvement in storage-access speed and performance," he said. Since SSDs are entirely electronic and contain no moving parts, they avoid the rotational latency and track-to-track seek delays that plague rotating disk media. For example, it is reasonable to expect a response time of about 6 milliseconds (ms) for a SAS disk and 4 ms for a 15,000 rpm Fibre Channel disk, but only about 1 ms for an SSD.
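As a rough illustration of what those response times imply (the millisecond figures are the article's examples, not vendor specifications), a quick back-of-the-envelope calculation shows the ceiling on serial I/O throughput at each latency:

```python
# Illustrative arithmetic only: at a queue depth of one, a device that
# completes one I/O per response time can sustain at most 1 / latency
# operations per second.
def max_iops(response_time_ms: float) -> int:
    """Upper bound on serial IOPS for a given per-I/O response time."""
    return int(1000 / response_time_ms)

for name, ms in [("SAS disk", 6.0), ("15K Fibre Channel disk", 4.0), ("SSD", 1.0)]:
    print(f"{name}: ~{max_iops(ms)} IOPS at queue depth 1")
```

Real devices service many requests in parallel, so actual IOPS run far higher, but the relative gap between rotating media and SSDs stays roughly the same.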
Many organizations are introducing SSD storage to optimize storage performance for tier-one applications. However, easing bottlenecks in storage may really just shift the problem. "In some cases, using SSD technology, we have seen the bottlenecks move completely away from storage to other parts of the environment, such as networking," Martin said. IT staff should perform careful benchmarks of the entire storage path before and after introducing SSDs in order to identify any changing bottleneck locations.
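A before-and-after comparison of the kind Martin recommends need not be elaborate. As a minimal sketch (the file path, I/O size and sample count are placeholders, and a purpose-built tool would give far more detail), the same random-read probe can be run against a test file before and after an upgrade:

```python
# Minimal latency sampler: time a series of random 4 KiB reads against a
# test file, returning the median latency in milliseconds. Running the
# identical probe before and after an SSD upgrade shows where time went.
import os
import random
import statistics
import time

def sample_read_latency(path: str, io_size: int = 4096, samples: int = 200) -> float:
    size = os.path.getsize(path)
    latencies = []
    with open(path, "rb", buffering=0) as f:  # unbuffered to time real reads
        for _ in range(samples):
            f.seek(random.randrange(0, max(1, size - io_size)))
            start = time.perf_counter()
            f.read(io_size)
            latencies.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(latencies)
```

Note that operating-system caching can mask device latency, so a serious benchmark would use a dedicated tool and a dataset larger than memory.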
How can data center owners stay ahead of bottlenecks moving forward? For example, is it a matter of more careful capacity planning, or are there other guidelines/best practices that you suggest?
Performance monitoring and planning are always important parts of data center management, but Martin suggests that a data center can enhance its results by matching SSD upgrades with corresponding improvements in the Ethernet or storage area network. "We have been saying for some time that SSD technology goes very well with 10 Gb Ethernet networking and 8 Gb and 16 Gb Fibre Channel storage area networking," he said. "I believe that SSDs and high-speed networks were made for each other."
However, organizations that cannot justify SSD or enhanced network performance can still make the most of their existing storage subsystems by adhering to a few best practices. For example, be aware of your physical storage, even in a virtualized environment. This means matching the drive type to your IOPS needs and spreading workloads across your available disks -- running multiple mission-critical applications from the same SATA drive or drive group is just asking for storage problems.
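Matching drive type to IOPS needs comes down to simple division. A sketch of that sizing arithmetic, using common rule-of-thumb per-drive IOPS figures rather than any vendor's specifications:

```python
import math

# Rule-of-thumb sustained random IOPS per spindle (illustrative, not
# vendor specs; real figures vary by model and workload).
DRIVE_IOPS = {"7.2K SATA": 80, "10K SAS": 140, "15K SAS": 180}

def drives_needed(required_iops: int, drive_type: str) -> int:
    """Minimum spindle count to satisfy a workload's IOPS requirement."""
    return math.ceil(required_iops / DRIVE_IOPS[drive_type])

print(drives_needed(2000, "7.2K SATA"))  # a 2,000 IOPS workload on SATA
print(drives_needed(2000, "15K SAS"))    # the same workload on 15K SAS
```

The same workload can demand several times as many slow spindles as fast ones, which is why stacking busy applications on a single SATA group is asking for trouble.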
In addition, consider short stroking the most critical drives: format them so that only the outer tracks hold data. This shortens seek distances and boosts performance somewhat, but wastes a considerable part of the drive's potential storage capacity.
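The capacity cost of short stroking is easy to quantify. As a back-of-the-envelope sketch (the 600 GB drive and 25% figure are hypothetical examples, not a recommendation):

```python
# Illustrative short-stroking tradeoff: confining data to the outer
# fraction of a drive shortens average seeks, at the cost of every
# gigabyte on the unused inner tracks.
def short_stroke(capacity_gb: float, outer_fraction: float):
    """Return (usable_gb, sacrificed_gb) for a short-stroked drive."""
    usable = capacity_gb * outer_fraction
    return usable, capacity_gb - usable

usable, wasted = short_stroke(600.0, 0.25)  # keep only the outer 25%
print(f"usable: {usable} GB, sacrificed: {wasted} GB")
```

Sacrificing three quarters of each drive is only defensible for a small set of latency-critical volumes, which is why Martin frames it as a tactic for the most critical drives rather than a general policy.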
Finally, be sure to use virtualization-aware performance monitoring tools that will help track and report on storage and network operations across the data center so that IT administrators can spot bottlenecks and formulate the best possible solutions to storage performance problems.
Storage bottlenecks normally occur because of improper planning, unanticipated growth needs and unexpected workload-balancing behaviors, which can all degrade application performance and impair the user experience. The introduction of SSD for top-tier applications can overcome the traditional problems associated with mechanical hard drives and boost performance significantly.