Solid-state drive storage has gained traction as enterprises seek faster and more reliable storage for top-tier applications. But the rise in solid-state drive deployment has also spawned a maintenance dilemma for IT professionals.
Although solid-state drives (SSDs) and hard disk drives (HDDs) do exactly the same job, they employ different technologies. Here are some of the most important tactics to optimize SSD performance and longevity.
Disable defragmentation, indexing and hibernation
Although defragmentation is a popular HDD performance enhancement, it's best to disable the feature with SSDs. An OS file system divides disk storage capacity into small units called clusters, or allocation units. On NTFS, volumes up to 16 TB use 4 KB clusters by default, so a small file may occupy only one cluster. However, most files span multiple clusters, and large files may involve many. The operating system assigns clusters as needed.
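The relationship between file size and cluster count is simple arithmetic. The sketch below assumes the 4 KB NTFS default; the function name is illustrative, not a real API:

```python
import math

CLUSTER_SIZE = 4 * 1024  # default NTFS cluster size (4 KB) for volumes up to 16 TB

def clusters_needed(file_size_bytes: int) -> int:
    """Number of clusters the file system must allocate for a file.
    Even a 1-byte file consumes a whole cluster."""
    return max(1, math.ceil(file_size_bytes / CLUSTER_SIZE))

print(clusters_needed(1_000))      # a small file fits in 1 cluster
print(clusters_needed(1_000_000))  # a ~1 MB file spans 245 clusters
```

Because those 245 clusters need not be adjacent on disk, a file's clusters scatter over time, which is exactly the fragmentation discussed below.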
As you add more files to the disk, and those files shrink and grow, the clusters involved in these files can gradually scatter across the disk. Although this is not directly harmful to the disk, fragmentation forces the mechanical parts of an HDD to work harder to locate the tracks and sectors containing each cluster. As a result, it reduces HDD performance and potentially lowers the disk's longevity. The operating system's defragmentation tool, "defrag," reorganizes file clusters so that the clusters of every file are contiguous on the HDD. This minimizes mechanical delays in searching for scattered clusters, aids performance and reduces unnecessary mechanical wear.
OS file systems will format SSDs to use clusters in a similar manner to HDDs, but SSDs have no mechanical parts, so fragmentation has no practical impact on SSD read/write performance. This means defragmentation offers no benefits to optimize SSD performance. Further, an SSD uses nonvolatile memory (NVM) components. NVM devices only offer a finite number of erase/write cycles, so defragmentation would actually drive up the number of unnecessary NVM writes.
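The endurance cost is easy to quantify with a toy model. The figures below are assumptions for illustration (a 3,000-cycle rating is a common order of magnitude for consumer TLC NAND, and ideal wear leveling is assumed), not vendor data:

```python
# Toy endurance model (assumption, not vendor specs): each full-drive write
# pass consumes one of the NAND's rated program/erase cycles, assuming ideal
# wear leveling spreads writes evenly across all blocks. Defragmentation
# rewrites data that an SSD already reads at full speed, so on an SSD every
# defrag pass is pure endurance cost with no read benefit.

RATED_PE_CYCLES = 3_000  # illustrative rating for consumer TLC NAND

def endurance_used(daily_writes_gb: float, capacity_gb: float, days: int) -> float:
    """Fraction of the drive's rated P/E cycles consumed over a period."""
    total_writes_gb = daily_writes_gb * days
    full_drive_passes = total_writes_gb / capacity_gb
    return full_drive_passes / RATED_PE_CYCLES

# 20 GB/day of normal writes on a 500 GB drive, vs. an extra 20 GB/day
# of defrag churn: the churn doubles the wear for zero read-speed gain.
print(endurance_used(20, 500, 365))
print(endurance_used(40, 500, 365))
```

The absolute numbers are small, which is why modern SSDs rarely wear out in practice, but the defrag writes buy nothing in return.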
Indexing is a Windows service that maintains a database of the most frequently accessed files to accelerate Windows search performance. Indexing triggers many small writes to maintain the list of files. Any time you create, change or delete a file, the system will perform additional writes on the index. As with defragmentation, SSDs do not benefit from file indexing, and those additional writes can potentially reduce SSD longevity.
Finally, hibernation is a Windows power-conservation mode that captures the computer's state and saves it as a disk file. This allows the system to power down completely, yet restart and resume its previous state quickly. The challenge with hibernation is capacity optimization rather than performance or longevity preservation. For example, there are few practical reasons to hibernate a server. Servers typically run constantly and do not enter power-conservation modes. Since SSDs generally offer less raw capacity than HDDs, it's best to disable hibernation.
Learn which file types are well suited for SSDs
SSDs are incredibly good at reads. Where HDDs suffer mechanical delays, SSDs have none and can access data from anywhere in their NVM stores. A modern SSD can perform a random read about 100 times faster than a typical HDD and deliver sequential reads at more than twice the speed of comparable HDDs -- though that varies with SSD design. Further, reads do not stress NVM storage cells the way writes do, so an SSD can deliver reads indefinitely.
This makes SSDs a terrific choice for data that is read regularly and rarely written. Examples of this include application and virtual machine files, along with rarely changed data, such as image files, PDFs and other static media.
Page files and SSDs
Paging is a protective technique that uses disk storage -- the page file -- to supplement physical memory in a computer, swapping content between memory and disk as needed. This causes some workload performance degradation, because disk access is much slower than memory access.
One way to address that degradation is to place page files on SSDs rather than HDDs. The tradeoff is performance versus longevity; experience suggests that page file usage is primarily small reads and far fewer large writes -- making SSDs generally suitable for page file use.
While SSDs are great with reads, they can potentially struggle with writes. For example, SSDs can experience delays when faced with write bursts. The result is that some workloads that rely heavily on infrequent, intensive writes might not perform as well with SSDs.
Know when to use the write cache
Caching is a common feature in magnetic HDD and SSD devices. Drive media often cannot keep pace with the data rates possible across the drive interface, so a server can wind up waiting for the storage device to catch up during writes and even reads -- especially during heavy-storage operations. Consequently, the applications performing storage writes and reads can experience delays.
To alleviate delays and optimize SSD performance, drive manufacturers add high-speed memory to the device, placing dynamic RAM (DRAM) inline as a buffer between the drive interface and the drive media. DRAM is volatile, meaning it will lose its contents if drive power fails, so the write cache uses a mix of techniques, such as cache flushing and Native Command Queuing, to intelligently organize and commit cached data onto the media.
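The volatility tradeoff can be shown with a minimal write-back cache sketch. This is a conceptual model, not real drive firmware; the class and method names are invented for illustration:

```python
# Minimal write-back cache sketch (conceptual model, not real firmware):
# writes land in a fast DRAM buffer and are acknowledged immediately; a
# later flush commits them to the slower media. If power fails before the
# flush, the buffered writes are lost -- which is why administrators may
# disable the write cache when commit integrity outweighs write speed.

class WriteCache:
    def __init__(self):
        self.buffer = {}  # pending writes in volatile DRAM: address -> data
        self.media = {}   # the "slow" persistent drive media

    def write(self, address: int, data: bytes) -> None:
        self.buffer[address] = data  # fast path: acknowledged before media commit

    def flush(self) -> None:
        self.media.update(self.buffer)  # commit buffered writes to media
        self.buffer.clear()

    def power_loss(self) -> None:
        self.buffer.clear()  # volatile DRAM contents vanish

cache = WriteCache()
cache.write(0, b"committed")
cache.flush()                 # this write reaches the media
cache.write(1, b"in flight")
cache.power_loss()            # this one is lost with the DRAM
print(sorted(cache.media))    # only the flushed write survived: [0]
```

Disabling the write cache is equivalent to calling flush on every write: slower, but nothing is ever "in flight" when power fails.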
It may seem counterintuitive to disable a performance-enhancing technology, but there are cases when it's appropriate to disable the write cache. For example, administrators may elect to disable a write cache when the integrity of write commits is more important than the sheer write performance of the drive.
Understand TRIM and write amplification
When you delete a file on an HDD, the system won't actually erase the clusters that compose it -- rather, it marks them as "free." Then, the HDD's magnetic media can overwrite those clusters as new data is stored in them. An SSD doesn't work this way.
Instead, it stores data within the cells of NVM devices. The NVM cells are grouped into "pages" of 4 KB to 16 KB, and those pages are organized into "blocks" of 128 to 512 pages. When NVM cells are empty, they can be written quickly, so write performance can be extremely good. But once the system writes the cells, it must erase the entire block before it can rewrite any page of that space. A small logical update can therefore force the drive to copy, erase and rewrite a much larger block, multiplying the physical writes and slowing subsequent write operations. This troublesome SSD behavior is called "write amplification."
To alleviate the write amplification issue, there is a pre-emptive erasure feature called TRIM in the ATA command set and UNMAP in the SCSI command set. The idea is that an operating system such as Windows can oversee which blocks are no longer being used and use the TRIM command to allow the SSD to erase the unused block pre-emptively before the OS tries to store new data to that block. When the system tries to store data to that block again, it has ideally already been wiped in the background and does not need to be erased first. This can optimize SSD write performance as you use the drive's capacity.
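The mechanics above can be captured in a toy model. This is an illustration, not a real flash translation layer, and it collapses a block's pages into a single erased/written state for brevity:

```python
# Toy model of NAND block erasure and TRIM (illustration, not a real FTL):
# a block can be written only when erased, so writing over stale data first
# costs a whole-block erase. TRIM lets the drive erase a deleted block in
# the background, before the next write arrives.

class Block:
    """One NAND block: writable only after a whole-block erase."""
    def __init__(self):
        self.erased = True

    def write(self) -> str:
        if not self.erased:
            # stale data present: must erase the entire block first (slow)
            self.erased = True
            result = "slow (erase-before-write)"
        else:
            result = "fast"
        self.erased = False  # block now holds data
        return result

    def trim(self) -> None:
        # OS reported the contents deleted: pre-emptively erase in background
        self.erased = True

# Without TRIM the drive never learns the file was deleted,
# so the next write to the block pays the erase penalty.
b = Block()
b.write()
print(b.write())   # "slow (erase-before-write)"

# With TRIM the block is pre-erased after deletion, so the write stays fast.
b = Block()
b.write()
b.trim()
print(b.write())   # "fast"
```

In practice, leave TRIM enabled (it is on by default in modern operating systems) and the drive handles this housekeeping on its own.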