Now that solid-state drives have begun to decrease in price, manufacturers tout them as one way to boost overall data center performance. SSD performance can surpass the fastest mechanical hard drives, but simply replacing legacy drives with SSDs might not yield any improvement. How do you make sure SSD performance is worth the investment?
SSD performance may choke on bottlenecks
As an administrator, you’ll have to weigh the potential benefits of solid-state drive (SSD) performance before making the switch. The best way to assess the performance gain an SSD upgrade might yield is to identify I/O-bound servers that are good candidates for solid-state storage, then use Windows Performance Monitor to pinpoint each server’s bottleneck.
Every server has a performance bottleneck — the one piece of hardware that is slower than everything else in the system and limits the server’s overall performance. For servers running I/O-intensive applications it is easy to assume the hard disks are the problem. Hard drives are mechanical devices and run far slower than other system components. Often, however, another system component is responsible for the bottleneck, especially for servers that are connected to high-performance storage arrays. In these situations an investment in solid state storage might be wasted unless the server’s non-hard-disk bottlenecks are also addressed.
While using Windows Performance Monitor to search for storage bottlenecks might seem like a simple task, the centralized nature of data center storage can complicate the process. For example, one of the primary Performance Monitor counters used to evaluate disk performance has always been Avg. Disk Queue Length (ADQL). Microsoft recommends that this counter never exceed 2; if it does, the hard disk may be too slow. The problem with this recommendation is that it assumes a single disk is being evaluated. If an organization uses SAN storage, it may be impossible for an administrator to know how many physical disks a server is actually using.
Even if the administrator does have an accurate SAN mapping, the ADQL counter can be misleading. Let’s say the counter value is 6. If you only consider Microsoft’s recommendation to keep the counter below a value of 2 then this would seem like a big problem. However, if the volume spans five drives, then you would need to divide the disk queue length of 6 by the total number of drives, 5. The ADQL per drive would only average about 1.2, well within Microsoft’s guidelines.
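The per-drive arithmetic above can be sketched as a quick helper. The drive count is whatever your SAN mapping reports; the threshold of 2 per physical disk is Microsoft's guideline cited above:

```python
def per_drive_queue_length(adql: float, physical_drives: int) -> float:
    """Normalize an Avg. Disk Queue Length reading by the number of
    physical drives backing the volume, so it can be compared against
    Microsoft's per-disk guideline of 2."""
    if physical_drives < 1:
        raise ValueError("volume must span at least one physical drive")
    return adql / physical_drives

# The article's example: an ADQL of 6 measured on a five-drive volume.
print(per_drive_queue_length(6, 5))  # 1.2 -- within the per-disk guideline
```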
Examine response times for clues
Because it is often difficult to know how many physical drives are mapped to a volume, it may be more effective to look at the Avg. Disk sec/Read, Avg. Disk sec/Write and Avg. Disk sec/Transfer counters instead. These counters measure the time taken to complete read, write and transfer operations. Response times alone won’t tell you whether you will benefit from an upgrade to solid-state disks, but they do hint at the system’s health. If you are seeing response times in the 5 to 10 millisecond range, the server is performing reasonably well. That isn’t to say solid-state drives wouldn’t improve the server’s performance, just that there are no major storage performance problems.
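Interpreting one of these counters can be sketched as below. The thresholds (up to roughly 10 ms healthy, beyond that worth investigating) are the rules of thumb from this article, not a Microsoft specification:

```python
def classify_response_time(avg_disk_sec_transfer: float) -> str:
    """Classify an Avg. Disk sec/Transfer sample (in seconds, as
    Performance Monitor reports it) using the article's rules of thumb."""
    ms = avg_disk_sec_transfer * 1000.0
    if ms <= 10.0:
        return "healthy"      # roughly 5-10 ms: decent performance
    return "investigate"      # over 10 ms: disk or network latency suspect

print(classify_response_time(0.007))   # healthy
print(classify_response_time(0.025))   # investigate
```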
Use tools to analyze storage traffic
If you see high disk response times — greater than 10 milliseconds — it’s possible that your existing hard disks are having trouble keeping pace with your server, but it is also possible that you are suffering from network latency issues instead.
In Windows operating systems, Performance Monitor is limited in its ability to determine whether a performance problem on SAN storage could be resolved by replacing mechanical hard disks with solid-state disks. This is due to the layered approach that Microsoft uses for storage I/O.
A number of components make up the Windows I/O stack. The lowest level at which Performance Monitor can observe storage performance is the port driver, which manages a specific transport. The port driver hands I/O operations off to the miniport driver, a vendor-supplied component specific to the underlying hardware. As such, Performance Monitor is unable to monitor the stack at the miniport level.
If you suspect your Fibre Channel connectivity might be the source of the bottleneck then you will need to use a third-party solution such as those provided by Fibre Channel Technologies or Network Instruments to analyze your Fibre Channel communications.
Analyzing storage traffic will tell you whether your Fibre Channel links are suffering from bandwidth saturation or latency issues. If you determine that the server’s storage connectivity is the issue, you may be able to improve storage performance by installing additional host bus adapters.
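A rough saturation check, assuming you have already captured throughput figures from a Fibre Channel analyzer; the 8 Gbps link speed in the example is an illustrative assumption:

```python
def fc_link_utilization(observed_mb_per_sec: float,
                        link_gbps: float = 8.0) -> float:
    """Return link utilization as a fraction of usable payload bandwidth.
    Fibre Channel's 8b/10b encoding means each 1 Gbps of line rate
    carries roughly 100 MB/s of payload (so ~800 MB/s on an 8 Gbps link)."""
    usable_mb_per_sec = link_gbps * 100.0
    return observed_mb_per_sec / usable_mb_per_sec

# Example: 680 MB/s observed on an 8 Gbps link.
util = fc_link_utilization(680.0)
print(f"{util:.0%}")  # 85% -- nearing saturation; consider adding HBAs
```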
There is no denying SSD performance is better than what is available with mechanical hard drives, but the potential gains may remain unrealized unless storage connectivity can keep pace with the storage array.
ABOUT THE AUTHOR: Brien Posey is a seven-time Microsoft MVP with two decades of IT experience. During that time, Posey published thousands of articles and wrote or contributed to dozens of IT books. Prior to becoming a freelance writer, Posey served as chief information officer for a national chain of hospitals and healthcare facilities. He also worked as a network administrator for some of the nation’s largest insurance companies and for the Department of Defense at Fort Knox.