There are numerous storage benchmarking tools capable of testing I/O performance on Linux operating systems. Five tools and commands have emerged as especially popular among IT professionals.
The Linux hdparm tool enables administrators to establish a basic, low-level measure of disk performance. Using hdparm with the -T option measures reads from the Linux disk cache, while the -t option reads through the cache without pre-caching the results, approximating raw sequential read throughput. Low-level tools such as hdparm bypass the file system and other higher-level constructs, however, so results can differ dramatically from real-world application performance and can vary between runs.
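As a quick sketch, the two hdparm options can be run back to back. The device name /dev/sda is an assumption (substitute the device under test), and hdparm needs root privileges to open the device:

```shell
#!/bin/sh
# Compare cached vs. buffered read speeds with hdparm.
# /dev/sda is an assumed device name -- substitute the device under test.
DEV=/dev/sda

# hdparm needs root and a real block device; skip gracefully otherwise.
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ] && [ "$(id -u)" -eq 0 ]; then
    hdparm -T "$DEV" | tee /tmp/hdparm_bench.txt      # -T: cached reads (memory speed)
    hdparm -t "$DEV" | tee -a /tmp/hdparm_bench.txt   # -t: buffered disk reads, no pre-caching
else
    echo "hdparm, root access or $DEV unavailable; skipping" > /tmp/hdparm_bench.txt
fi
```

Because both figures fluctuate with concurrent system activity, the hdparm man page suggests repeating each test a few times on an otherwise quiet system.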
Admins often use the Linux dd -- data duplicator -- command for tasks such as backup and copy, but it can also measure the sequential throughput of a storage device.
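A minimal sketch of that idea: write a fixed amount of data and let dd report the throughput. The 64 MiB size and /tmp location are illustrative assumptions; conv=fdatasync forces the data to the device before dd prints its summary, so the figure is not inflated by the page cache:

```shell
#!/bin/sh
# Rough sequential-write throughput with dd.
# File size and location are illustrative assumptions.
TESTFILE=/tmp/dd_bench.img

# conv=fdatasync flushes data to the device before dd prints its summary,
# so the reported rate reflects the storage rather than the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2> /tmp/dd_bench.txt

cat /tmp/dd_bench.txt   # dd reports bytes copied, elapsed time and throughput on stderr
rm -f "$TESTFILE"
```

Reading from /dev/zero keeps the source side effectively free, so the result is dominated by write performance; swapping the input and output files gives a rough sequential-read figure instead.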
Flexible I/O Tester (FIO) is perhaps the most versatile and popular tool for benchmarking hard disk drive and solid-state drive devices. It enables administrators to run sequential read/write tests with varied I/O block sizes and queue depths.
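For instance, a small random-read job might look like the following; the file size, block size and 10-second runtime are illustrative assumptions, not tuning advice:

```shell
#!/bin/sh
# Sketch of a 4 KiB random-read fio job against a file in /tmp.
if command -v fio >/dev/null 2>&1; then
    fio --name=randread --rw=randread --bs=4k --size=64M \
        --ioengine=psync --iodepth=1 --runtime=10 --time_based \
        --directory=/tmp --output=/tmp/fio_bench.txt
    rm -f /tmp/randread.0.0   # remove the data file fio created
else
    echo "fio not installed; skipping" > /tmp/fio_bench.txt
fi
```

Raising --iodepth above 1 only takes effect with an asynchronous engine such as --ioengine=libaio; with the synchronous psync engine, queue depth stays at 1 regardless.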
The Sysbench benchmarking utility is intended for more general-purpose use: it can test processor, database and file I/O (fileio) performance. The fileio test checks disk I/O performance through sequential/random read/write testing, with adjustable I/O block sizes, synchronous or asynchronous I/O, and other disk behaviors. The fileio test in Sysbench is simpler and offers fewer options than tools such as FIO, however.
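The fileio test runs in three stages -- prepare creates the test files, run performs the measurement and cleanup removes them -- which the following sketch walks through. The 128 MiB total size and seqrd (sequential read) mode are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a sysbench fileio sequential-read test.
# The 128 MiB total file size is an illustrative assumption.
if command -v sysbench >/dev/null 2>&1; then
    cd /tmp
    sysbench fileio --file-total-size=128M prepare  >  sysbench_bench.txt
    sysbench fileio --file-total-size=128M --file-test-mode=seqrd run >> sysbench_bench.txt
    sysbench fileio --file-total-size=128M cleanup  >> sysbench_bench.txt
else
    echo "sysbench not installed; skipping" > /tmp/sysbench_bench.txt
fi
```

Other --file-test-mode values (seqwr, rndrd, rndwr, rndrw) exercise sequential writes and the random-access patterns mentioned above.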
The IOzone command-line tool supports a wide range of I/O test types, including write, read, backwards reads, random mixes and variations, including fwrite, fread, pread, pwritev and preadv. Administrators can specify file sizes and block sizes for testing.
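IOzone selects tests by number with the -i flag; the sketch below limits the run to write/rewrite (-i 0) and read/reread (-i 1). The 16 MiB file size and 1 MiB record size are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of an iozone run limited to write/rewrite (-i 0) and
# read/reread (-i 1); file and record sizes are illustrative.
if command -v iozone >/dev/null 2>&1; then
    iozone -i 0 -i 1 -s 16m -r 1m -f /tmp/iozone.tmp > /tmp/iozone_bench.txt
    rm -f /tmp/iozone.tmp
else
    echo "iozone not installed; skipping" > /tmp/iozone_bench.txt
fi
```

Adding further -i numbers brings in the other test types, such as backwards reads and the fwrite/fread and preadv/pwritev variants.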
This list is certainly not complete. Other IT systems-related tools, including homogeneous management tools and comprehensive heterogeneous systems management frameworks, may also offer benchmarking capabilities across major hardware subsystems -- including storage I/O.
Additionally, any discussion of storage benchmarking tools would not be complete without a reminder of their limitations.
Differences in storage I/O testing tools
Not all storage benchmarking tools are created equal. Different tools typically specialize in particular aspects of performance.
For example, some tools emphasize I/O performance for specific disk I/O attributes -- such as storage reads, writes, random access, sequential access, latency, throughput, I/O block sizes -- or a mix of attributes. Other tools may step back from disk hardware and deliver results that focus on the performance of the disk's file system, or how excess storage traffic interacts with system RAM (caching).
Consequently, organizations typically select storage benchmarking tools to test those performance attributes that are most important or interesting to the business. For example, the performance of sequential reads from storage would likely affect a server intended to deliver streaming media, while a general file server might emphasize the performance of random disk reads/writes.
It is also commonplace to employ multiple tools and compare results when evaluating storage subsystem performance. All benchmarking tools are basically software, and the performance figures and other results they produce will inevitably be skewed by the way that software is written, the system hardware on which that benchmark runs and the other software running concurrently on the system. The same software, testing the same storage subsystem, will probably deliver markedly different results when executed on a different server -- or with a different amount of software competing for compute and I/O resources.
An administrator might use multiple Linux storage benchmarking tools, such as FIO and Sysbench, and compare the results. If the results are consistent -- not necessarily the same -- then the performance figures are likely trustworthy. If the results vary wildly, it may be necessary to investigate and understand the differences before accepting the accuracy of any benchmarking results.