
Flash runs past read cache

Just because you can add a cache doesn't mean you should. It is possible to have the wrong kind, so weigh your options before implementing memory-based cache for a storage boost.


Can you ever have too much cache?

As a performance optimizer, cache has never gone out of style, and with today's affordable flash and cheap memory, nearly every data center device now wears one.

Fundamentally, a classic read cache helps avoid long repetitive trips through a tough algorithm or down a relatively long input/output (I/O) channel. If a system does something tedious once, it temporarily stores the result in a read cache in case it is requested again.
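To make that concrete, here is a minimal read-cache sketch in Python; expensive_lookup is a hypothetical stand-in for any slow trip down an I/O channel or through a tough algorithm:

```python
import time

cache = {}

def expensive_lookup(key):
    """Hypothetical stand-in for a slow trip down a long I/O channel."""
    time.sleep(0.1)                # simulate latency
    return "data-for-%s" % key

def cached_read(key):
    if key in cache:               # hit: serve from cache, skip the slow path
        return cache[key]
    value = expensive_lookup(key)  # miss: do the tedious work once...
    cache[key] = value             # ...and keep the result for repeat requests
    return value
```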

Duplicate requests don't need to come from the same client. For example, in a large virtual desktop infrastructure (VDI) scenario, hundreds of virtual desktops might boot from the same master image of an operating system. With a cache, every user gets a performance boost, and the downstream system is spared a lot of duplicate I/O work.

The problem with using old-school, memory-based cache for writes is that if you lose power, you lose the cache. Thus, volatile memory is traditionally used only as a read cache. Writes are set up to "write through" -- new data must persist somewhere safe on the back end before the application continues.
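A write-through policy is easy to express in code. In this hypothetical sketch, a write does not return until the backing store holds the data, so losing power costs you nothing that can't be re-read:

```python
class WriteThroughCache:
    """Volatile read cache with a write-through policy: new data must
    persist to the back end before the caller continues."""

    def __init__(self, backing_store):
        self.cache = {}                # volatile -- lost on power failure
        self.backing = backing_store   # e.g., a dict standing in for disk

    def write(self, key, value):
        self.backing[key] = value      # persist first: safe if power is lost
        self.cache[key] = value        # then update the volatile copy

    def read(self, key):
        if key not in self.cache:      # miss: refill from the back end
            self.cache[key] = self.backing[key]
        return self.cache[key]
```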

Here comes flash

Flash is nonvolatile random access memory (NVRAM) and is used either as cache or directly as a tier of storage. Although orders of magnitude slower than dynamic RAM (DRAM), it's much faster than hard disk. Its speed and persistence let you write into a flash-based cache quickly and safely; the cache can then "write back" to permanent bulk storage whenever it's convenient.
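Contrast that with a write-back policy, sketched below under the assumption that the cache medium itself is persistent, as flash is; dirty entries flush to bulk storage later, at the cache's convenience:

```python
class WriteBackCache:
    """Persistent (flash-like) cache with a write-back policy: writes
    complete in the cache; dirty data flushes to bulk storage later."""

    def __init__(self, bulk_store):
        self.cache = {}        # assumed persistent, like flash
        self.dirty = set()     # keys not yet written to bulk storage
        self.bulk = bulk_store

    def write(self, key, value):
        self.cache[key] = value   # fast, and safe because the cache persists
        self.dirty.add(key)

    def flush(self):
        """Runs whenever convenient -- idle time, a timer, low memory."""
        for key in list(self.dirty):
            self.bulk[key] = self.cache[key]
            self.dirty.discard(key)
```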

Caching can also be distributed across a cluster: new data is protected by quickly replicating cache copies across multiple systems. This replication-based scheme pools large total cache capacities, made up of flash and/or memory, and shares them across the cluster.
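At its core, the replication idea looks like the toy sketch below, where peer caches are stood in by plain dicts; real products also handle cluster membership, failure detection and consistency, all omitted here:

```python
class ReplicatedCache:
    """Toy replication sketch: each write is mirrored to `copies` peers
    so a single node failure cannot lose new data."""

    def __init__(self, local, peers, copies=2):
        self.local = local     # this node's cache (a plain dict here)
        self.peers = peers     # other nodes' caches in the cluster
        self.copies = copies

    def write(self, key, value):
        self.local[key] = value              # fast local cache write
        for peer in self.peers[:self.copies]:
            peer[key] = value                # mirror before acknowledging
```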

Some caching vendors to consider include Atlantis Computing's memory-based ILIO Persistent VDI, PernixData FVP for pooled server-side flash and memory acceleration, and Infinio's drop-in distributed virtual server RAM-based cache.

Prices are dropping and densities are increasing in both flash (NVRAM) and memory (DRAM). Cache can be tiered just like storage, using memory and flash together. Will larger memory investments, then, shift into the flash market?
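A two-tier read cache can be sketched as a small, fast DRAM level in front of a larger flash level; the tier sizes below are arbitrary placeholders, and eviction is omitted for brevity:

```python
class TieredCache:
    """Two-tier read cache: a small, fast DRAM tier in front of a larger
    flash tier. Sizes are arbitrary placeholders; no eviction is shown."""

    def __init__(self, dram_size=1_000, flash_size=100_000):
        self.dram, self.flash = {}, {}
        self.dram_size, self.flash_size = dram_size, flash_size

    def get(self, key, fetch):
        if key in self.dram:                  # fastest path
            return self.dram[key]
        if key in self.flash:                 # slower, but still beats disk
            value = self.flash[key]
        else:
            value = fetch(key)                # miss both tiers: go to disk
            if len(self.flash) < self.flash_size:
                self.flash[key] = value
        if len(self.dram) < self.dram_size:   # promote hot data into DRAM
            self.dram[key] = value
        return value
```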

Taking cache to the bank

Where should cache be located for maximum impact?

Users receive the largest boost if the cache is close to the client application on the server host. But if you move the cache down the stack and out into the network (e.g., Astute Networks) or further into a storage array, then the cache capacity and its contents can be more widely shared.

Chips also have multiple levels of built-in processor cache, which grow larger with each generation. Significant amounts of cache might show up in network adapters and might even be built into hard disk drive enclosures.

Even as some vendors try to turn components such as servers and adapters into commodities, others will keep innovating, and the performance a cache offers is a big differentiator.

Today, all-flash arrays are popular, so consider whether you really need additional levels of flash cache above an all-flash array. Layering is risky: schemes with multiple levels of write-back cache are vulnerable to data loss or corruption in the event of an outage.

If you have invested in a lot of memory or flash cache, consider saving money with cheaper, slower, larger disks on the back end. Some caches also help by coalescing the small random write I/O typical of dense virtual machine clusters into larger, more efficient blocks.
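The coalescing idea amounts to buffering small writes and flushing them as one large, roughly sequential batch; in this hypothetical sketch, the 1 MB threshold is an arbitrary placeholder:

```python
class CoalescingWriteBuffer:
    """Accumulates small random writes and flushes them to the back end
    in one batch, sorted by offset to approximate sequential I/O."""

    FLUSH_AT = 1024 * 1024    # 1 MB threshold -- an arbitrary placeholder

    def __init__(self, backend_write):
        self.pending = []              # (offset, data) pairs awaiting flush
        self.size = 0
        self.backend_write = backend_write

    def write(self, offset, data):
        self.pending.append((offset, data))
        self.size += len(data)
        if self.size >= self.FLUSH_AT:
            self.flush()

    def flush(self):
        for offset, data in sorted(self.pending):  # one ordered stream
            self.backend_write(offset, data)
        self.pending, self.size = [], 0
```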

Space-saving cache

Caches, like storage, can be deduplicated to make them effectively larger. If data is deduplicated in storage and cache -- and even in memory or down in archives -- you can reclaim capacity at several levels. HP StoreOnce and Oracle Hybrid Columnar Compression are examples of global dedupe and lifecycle compression offerings.
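Deduplication boils down to indexing blocks by a hash of their content so that identical blocks are stored only once. A minimal sketch, using SHA-256 and a hypothetical key/block interface:

```python
import hashlib

class DedupeCache:
    """Content-addressed cache: identical blocks are stored once, so the
    same physical capacity effectively holds more data."""

    def __init__(self):
        self.blocks = {}   # content hash -> block bytes (stored once)
        self.index = {}    # logical key -> content hash

    def put(self, key, block):
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block)  # duplicates share one copy
        self.index[key] = digest

    def get(self, key):
        return self.blocks[self.index[key]]
```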

Analyzing and tuning cache by hand is a tough job: you have to examine low-level behavior and correlate it with time-dependent cache hit/miss ratios, allocated cache sizes, cache locations, levels of cache and more.
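Even a simple pair of counters goes a long way here. A sketch of the basic bookkeeping behind a hit/miss ratio:

```python
class CacheStats:
    """Counts hits and misses so the hit ratio can be tracked over time
    and correlated with cache size, location and level."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```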

Caches get "warmed up" by predicting what might be needed based on history. Smart caches also predict what to evict to make room for hot data, which newly created data to keep and even how big they need to be.
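The classic baseline for eviction is least recently used (LRU), sketched here with Python's OrderedDict; smarter caches layer prediction on top of a policy like this:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used eviction: when full, drop the entry that has
    gone unread the longest to make room for hotter data."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)     # mark as recently used
            return self.data[key]
        return None                        # miss

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the coldest entry
```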
