Shifts in business, SSD and cloud create data-tiering confusion

Even as business data and the storage systems that hold it change quickly, data-tiering adoption has lagged -- and new options are on the scene.

New storage options are blurring the distinctions once evident among different data types.

Data tiering, which started as a fairly straightforward concept, has become quite convoluted, which means enterprises might find it harder to stage data appropriately.

Data tiering has been a touted strategy for years because different groups of data have different value to the enterprise. "Not all information needs a Mercedes-level storage system," said Forrester Research senior analyst Henry Baltazar -- and the cost savings are significant.

Storage tiers place the highest-value, most performance-sensitive data on the most costly storage drives. Less time-sensitive data migrates to lower-cost hardware. Rather than pay for top-of-the-line storage across the board, corporations mix in lower-cost alternatives.

Three storage tiers are common:

  • Tier 1 is fast, expensive storage, such as Fibre Channel drives. These storage systems are often reserved for important complex applications, such as database management systems, and include sophisticated scalability and reliability functions.
  • Tier 2 storage consists of slower, less expensive disk systems supporting applications such as email. Companies invest less in making sure this data is available 24 hours a day, seven days a week.
  • Tier 3 takes care of backup and recovery systems, usually stored on low-cost disk or even tape drives. Here, information recovery can take hours or days.
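
To make the policy concrete, here is a minimal sketch of how a tier-assignment rule might look in code. The thresholds and criteria are illustrative assumptions, not any vendor's actual logic:

```python
from datetime import datetime, timedelta

TIER_1, TIER_2, TIER_3 = 1, 2, 3

def assign_tier(is_critical_app: bool, last_accessed: datetime) -> int:
    """Map a dataset to a storage tier by criticality and access recency.

    The 30- and 180-day cutoffs are assumed values for illustration only.
    """
    age = datetime.now() - last_accessed
    if is_critical_app and age < timedelta(days=30):
        return TIER_1  # fast, expensive storage (e.g., Fibre Channel)
    if age < timedelta(days=180):
        return TIER_2  # slower, cheaper disk for apps such as email
    return TIER_3      # backup/archive on low-cost disk or tape

# A database table touched yesterday lands on Tier 1; a file untouched
# for more than a year falls through to Tier 3.
print(assign_tier(True, datetime.now() - timedelta(days=1)))     # 1
print(assign_tier(False, datetime.now() - timedelta(days=400)))  # 3
```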

Problems with data tiering

Data tiering never really took off. "Frankly, fewer companies have adopted tiering than [storage] vendors would like to admit," said George Crump, president of Storage Switzerland, a consulting firm.

The data-tiering process requires IT departments to do a fair amount of up-front work. Corporations evaluate the value of their data, examine their information flows and determine what data belongs where -- which all takes time. Given the dynamic nature of business, these assessments must be updated periodically -- say, every six months. That's bad news for already-harried data center staff.

Creating tiered storage is also manually intensive. Allocating storage requires IT technicians to touch a lot of devices. But this is changing.

"The data migration tools vendors offer have become more sophisticated and offer much more automation than they did in the past," said Mark Peters, senior analyst at Enterprise Strategy Group. This is cutting down on data migration time.

While automation has improved the tiering process, other recent technical advances blur the distinctions among the three tiers. "Solid-state drives (SSDs) are now gaining significant momentum in the enterprise," Peters said.

Typically, SSDs are the top tier because their performance is lightning fast. But they're expensive -- sometimes 10 times the price of other storage products. Some enterprises mix SSDs with other high-cost, high-performance storage arrays to lessen the sticker shock. You will see SSD labeled Tier 0, or even Tier 1½, to convey how high-value and high-cost a solution it is.
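
Back-of-the-envelope math shows why the mix pays off. The prices and hot-data fraction below are illustrative assumptions (roughly the 10-to-1 gap cited above), not vendor quotes:

```python
SSD_PER_GB = 1.00  # assumed $/GB for enterprise SSD
HDD_PER_GB = 0.10  # assumed $/GB for enterprise disk (the ~10x gap)

capacity_gb = 100_000  # 100 TB of total capacity
hot_fraction = 0.10    # assume only 10% of data is performance-critical

all_ssd = capacity_gb * SSD_PER_GB
hybrid = (capacity_gb * hot_fraction * SSD_PER_GB
          + capacity_gb * (1 - hot_fraction) * HDD_PER_GB)

print(f"all-SSD: ${all_ssd:,.0f}")  # all-SSD: $100,000
print(f"hybrid:  ${hybrid:,.0f}")   # hybrid:  $19,000
```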

Clouds rolling in

"I’ve been surprised that SSDs have begun to play a very significant role in archiving, backup and recovery applications," Baltazar said. SSDs are used in cloud storage because enterprises are backing up high-bandwidth media, such as pictures and video, that tax traditional backup storage solutions. Cloud storage vendors are also adopting the technology to satisfy speed requirements as enterprises move information from their offices to the cloud.

Cloud computing is having a significant effect on data tiering. In a growing number of cases, corporations are using the cloud as a Tier 3 option to back up their information instead of using cheap physical storage.
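
As a minimal sketch of that pattern, the snippet below pushes a backup artifact to a cloud object store -- here Amazon S3 via the boto3 SDK. The bucket name and file paths are hypothetical:

```python
import boto3  # AWS SDK for Python; other object stores work similarly

s3 = boto3.client("s3")

def backup_to_cloud(local_path: str, bucket: str, key: str) -> None:
    """Upload a backup artifact to an S3 bucket acting as Tier 3."""
    s3.upload_file(local_path, bucket, key)

# Hypothetical nightly database dump headed for the cloud tier.
backup_to_cloud("/backups/db-2013-11-01.tar.gz",
                "example-corp-tier3-backups",
                "db/db-2013-11-01.tar.gz")
```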

User behavior also muddles neat data tiers. Employees open cloud accounts and store company documents there. "Increasingly, businesses are being forced to put policies in place to determine how to deal with user-stored data," Crump said.

Enterprises also have to take market flux into account when planning a storage scheme. SSD prices have been dropping by double-digit percentages over the past few years, so the underlying economics of a data-tiering plan can quickly become outdated.

Corporations have more choices for structuring their storage tiers than ever before -- to the point of having too many options. To make good decisions, enterprises may call in outside consulting services and pay for training courses for their employees.

If you can slog through the short-term confusion of tiers, SSD and cloud evolution, the data center stands to benefit from greatly enhanced storage efficiency.

About the author:
Paul Korzeniowski is a freelance writer who specializes in cloud computing and data-center-related topics. He is based in Sudbury, Mass., and can be reached at paulkorzen@aol.com.

This was last published in November 2013

Join the conversation

4 comments

Where are you on data tiering?

Because you buy what you need for the performance you require -- one-size-fits-all disk I/O does not work with differing workloads.

Companies not data tiering? It's BECAUSE of the cost of storage that companies data tier! You wouldn't buy all Tier 3 disks for your SAN when you need to run databases on it... well, not unless you don't care about your db performance. I find it hard to swallow that it never really took off -- seems like it's the most sensible approach to the problem.

Haven't we been doing this for 50 years, since S/360 did virtual storage in 1964? IBM wasn't the first... didn't Honeywell do this in 1962?

OK, not me personally -- I've only been doing this sort of thing since 1985, way too young to be an elder.

"Data tiering never really took off" rubbish. the whole mainframe industry has been absolutely reliant on it for 50 years. it's embeded in z/OS' genesis. You can't have a functioning general purpose mainframe without it.

Maybe 'data shops that needed tiered data structures were already using them, so didn't buy the sales hypeware'?

What I think's happened is we've seen another aspect of the short memories of humans. Real enterprise-maturity data shops won't even blink at data tiering; they wouldn't be able to open their doors without it.

Some of the smaller shops that have never seen enterprise computing probably need to grow up a bit and learn from the last 60 years of the industry, but there's nothing new there either.

What this 'new' stuff does is remove the knowledge of where the data is stored in its various tiers from the mainframe's catalogue to the storage device and make the technology more available to the bitty boxes.

The catch is that the system's compute is occasionally going to have to tolerate high latencies -- three or four orders of magnitude slower.

The danger is that the OS doesn't know what tier the storage is in, so it doesn't know at the time of the data fetch that it's going to be a while, whereas the 'tiers' in mainframe are known and the latency can be calculated ahead of time. The storage array may well do statistically based placements and migrations, but there will always be quarterly, yearly and ad hoc jobs that are outliers; they will be waiting a while unless you use processes to recall that data ahead of time. (Schedule a batch job in the deck to issue a specific HSM recall -- takes about three lines of JCL if you know what you're doing. But I'd have to talk to the guys; it's been a few years.)

If you're having problems with your data tiering strategy or execution, stump up the dollars and find an old mainframe storage technician or mainframe systems programmer. We'll probably look at you with a blank expression, wondering why there's even a question -- something like, "You mean you'd consider a system that *doesn't* do that?"

Some of us could do with the work and for us it's "Been there, done that, worn out the lab coat."