
Data center storage trends 2010: What's hot and what's on the horizon

It's been a busy couple of years in the advancement of IT storage systems technologies. In this tip, storage industry experts comment on hot data center storage trends that data center admins should be keeping an eye on in 2010, including thin provisioning and the adoption of SSDs.

From the rising performance and falling prices of solid state drives (SSDs) to the spread of storage virtualization, data deduplication, new array configurations and faster interconnect systems, a lot has happened in the world of enterprise IT storage systems over the past couple of years. So how are busy IT executives supposed to stay abreast of all the fast-paced improvements so they can make sound decisions on future IT projects, business needs and potential technology refreshes?

A great way is to get the latest advice and critical analysis from experts who are elbows-deep in the field. To do that, we spoke with two well-known IT storage analysts to get their thoughts on what you need to have on your corporate radar right now when it comes to innovative enterprise storage systems and your business.

Three major storage trends today
Jerome Wendt, an analyst with Austin, Texas-based DCIG Inc., a storage systems review and consulting firm, said three noteworthy trends top his list of storage happenings so far this year.

First, adoption of SSDs has jumped dramatically in the storage world recently, with about 70% of the mid-level storage arrays in the marketplace now including SSDs as standard equipment, Wendt said. That's a rapid shift: most mid-level storage arrays last year didn't include SSDs at all, he said.

"When they were first introduced, SSDs were seen as just drives on an array" by the storage devices, without automatic recognition for their greater speed capabilities, Wendt said. That changed, however, when vendors found ways for their arrays to instantly use SSDs to their best advantage.

That big development is Wendt's second major storage trend today: sub-volume optimization. This feature puts the fastest drives to work on the most critical data while relegating non-critical or archived data to slower storage drives, and it lets the user designate where each piece of data is stored for greater speed and efficiency.

That's a big twist from data storage of the past, Wendt noted, and it means those SSDs can now be used more efficiently. What's even bigger is that the storage systems are now able to do this dynamically, compared with the manual data juggling of the past, he said.

Why is that important? Because it means that users can buy fewer SSDs, which still have a cost premium over hard disk drives, while taking advantage of both disk types for different needs, Wendt said. It essentially makes storage systems smart enough to recognize active, critical data and move it automatically to the correct storage drives.
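To make the idea concrete, here is a minimal, hypothetical sketch of the kind of logic a sub-volume tiering engine applies: count I/O per extent, then periodically promote the hottest extents to the SSD tier and demote the rest to hard disk. The class, names and the simple "top-N by access count" policy are illustrative assumptions, not any vendor's actual implementation.

```python
from collections import defaultdict

class TieringEngine:
    """Toy model of automated sub-volume ("sub-LUN") tiering."""

    def __init__(self, ssd_capacity_extents):
        self.ssd_capacity = ssd_capacity_extents
        self.access_counts = defaultdict(int)  # extent id -> I/O count
        self.ssd_tier = set()                  # extents currently on SSD

    def record_io(self, extent_id):
        self.access_counts[extent_id] += 1

    def rebalance(self):
        # Promote the hottest extents to SSD; everything else stays on HDD.
        hottest = sorted(self.access_counts,
                         key=self.access_counts.get, reverse=True)
        self.ssd_tier = set(hottest[:self.ssd_capacity])
        self.access_counts.clear()             # start a fresh sampling window

    def tier_of(self, extent_id):
        return "SSD" if extent_id in self.ssd_tier else "HDD"

engine = TieringEngine(ssd_capacity_extents=2)
for _ in range(100):
    engine.record_io("db-index")   # hot, frequently accessed data
for _ in range(90):
    engine.record_io("db-log")     # also hot
engine.record_io("old-archive")    # cold data, touched once
engine.rebalance()
print(engine.tier_of("db-index"))    # hot extent lands on SSD
print(engine.tier_of("old-archive")) # cold extent stays on HDD
```

The point of the sketch is the division of labor Wendt describes: the administrator no longer decides block by block; the array's own access statistics drive the placement.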

These breakthroughs are right at the edge of new storage developments, Wendt said. "This is all really happening in the last three to four months," from vendors including 3Par and Compellent Technologies Inc. EMC Corp. is also joining the fray this summer with its newly announced CX4 storage systems, he said. "Not every storage provider has it yet, but I see this becoming a huge differentiator in the marketplace. It's almost going to have to be a must-have feature for the midrange. Sub-volume optimization is really becoming the differentiator."

Wendt's third major storage trend in today's marketplace is thin provisioning, which lets a storage system present a volume's full advertised capacity to hosts up front while committing physical capacity only as data is actually written. "I just see it becoming much more important going forward on storage arrays," said Wendt. "It doesn't require you to keep adding storage."
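A minimal sketch shows the core trick: the volume advertises a large logical size, but blocks consume physical space only on first write, and unwritten blocks simply read back as zeros. The class and block-map representation here are illustrative assumptions, not how any particular array implements it.

```python
BLOCK_SIZE = 4096  # bytes per block, an illustrative choice

class ThinVolume:
    """Toy model of a thin-provisioned volume."""

    def __init__(self, logical_size_blocks):
        self.logical_size = logical_size_blocks  # capacity the host sees
        self.blocks = {}                         # block number -> data

    def write(self, block_no, data):
        if not 0 <= block_no < self.logical_size:
            raise IndexError("write past end of volume")
        self.blocks[block_no] = data             # allocate on first write only

    def read(self, block_no):
        # Unwritten blocks read back as zeros and consume no physical space.
        return self.blocks.get(block_no, b"\x00" * BLOCK_SIZE)

    @property
    def allocated_blocks(self):
        return len(self.blocks)

vol = ThinVolume(logical_size_blocks=1_000_000)  # ~4 TB promised to the host
vol.write(0, b"boot".ljust(BLOCK_SIZE, b"\x00"))
vol.write(42, b"data".ljust(BLOCK_SIZE, b"\x00"))
print(vol.logical_size)      # 1,000,000 blocks advertised
print(vol.allocated_blocks)  # only 2 blocks physically consumed
```

That gap between advertised and consumed capacity is what Wendt means by not having to "keep adding storage": physical disks are purchased to track actual writes, not promised volume sizes.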

One other hot topic to watch in storage, according to Wendt, is the fledgling use of deduplication in storage systems. "It's an interesting concept and people are talking about it, but I still think it's a year out for primary storage use," he mentioned. The lingering problem is that it requires a lot of processing of files that are accessed often, which slows things down and complicates the tasks, Wendt said. Deduplication for data backup is a well-established market, allowing users to only back up new data while skipping data that's already been stored, saving time and storage space. But the challenges of using it to help organize primary storage arrays are greater, he said. "It's a much more complicated process and it gets really pricey. I don't think the value proposition is there yet."

Russ Fellows, a storage analyst with The Evaluator Group of Greenwood Village, Colo., said one of the things he anticipates will change the most in the future is where companies choose to locate their stored data -- internally or externally. While some experts talk about this in terms of internal or external cloud storage, Fellows prefers to call it "IT as a Service" because cloud terms mean different things to different people.

"What you're really trying to do is enable service organization," Fellows said. "It's really more of a matter of figuring out what things should be delivered internally and which should be delivered externally. The larger you are, the more likely you should have them internal [for security]. The smaller you are, the more likely you should have them external" for cost savings to cut down on the need for expensive storage systems.

Storage nowadays very much centers on evolution, Fellows said, as vendors find ways to improve and expand existing storage technologies. Even the recent popularity of SSDs in storage arrays is part of an evolutionary trend, he added.

By automating the placement of critical data on SSDs for faster access, companies are seeing that they can "tier" their data storage based on varying needs. Those automated systems evolved from a real IT need, Fellows said: it simply wasn't efficient for IT staffers to move the data around manually, as they did in the recent past.

That kind of evolution will continue in storage, Fellows predicted. "In 10 years it will all be types of flash SSDs," with no more traditional disk-based hard drives shipping, he said. Within six or eight more generations of storage systems, there will be no more spinning storage platters, he said.

According to Fellows, another evolving technology that will continue to grow and mature for enterprise IT storage is Fibre Channel over Ethernet (FCoE) as an interconnection system. "A lot of vendors are pushing it hard, particularly Cisco Systems."

But while it's intriguing, Fellows said, it's still evolving and is not quite ready for widespread use today. "[Cisco] saw benefits because it is simpler and requires less cabling. That's true, but all the problems aren't worked out yet."

One existing shortcoming with FCoE today is that you can't yet reliably daisy-chain systems and switches together, which needs to happen for it to thrive as an interconnect, Fellows said. "Expect those issues to be worked out in two years or so; that's when to look at FCoE."

ABOUT THE AUTHOR: Todd R. Weiss is an award-winning technology journalist and freelance writer who worked as a staff reporter from 2000 to 2008. He spends his spare time working on a book about an unheralded member of the 1957 Milwaukee Braves and watching classic Humphrey Bogart movies. Follow him on Twitter @TechManTalking.
