
Virtualization's random I/O rewrites the data center storage script

Should data center managers pine for the good old days of direct-attached storage? Or is adding a little flash the way to counteract the fundamental storage changes wrought by virtualization?


Virtualization changed server-based computing forever, but the effects on storage are equally profound.

Storage and virtualization expert Scott Lowe explains the trouble with storage in virtual environments in a conversation with the editors of Modern Infrastructure.

Modern Infrastructure: Why is server virtualization so tough on traditional storage arrays?

Scott Lowe: Back in the old days, we had physical servers with storage custom-tailored to meet unique workload needs, each with different I/O patterns (log files have different I/O patterns than databases, for example). With virtualization, we've taken all these different workloads with different block sizes and I/O patterns, dumped them all into the storage area network (SAN) and told it to sort it out on its own. And we're doing this with spinning disk, which is great for sequential access patterns, but not for random I/O. So we're basically asking disk-based storage arrays to do something they were never intended to do.
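To make that concrete, here is a minimal Python sketch (not from the interview) of the so-called I/O blender effect: each guest's stream is sequential on its own virtual disk, but once the hypervisor interleaves the streams onto shared storage, the array sees what amounts to random offsets. The function name, block sizes and offsets are purely illustrative.

```python
import random

# Hypothetical illustration of the "I/O blender" effect: each VM issues a
# perfectly sequential stream within its own virtual disk, but once the
# hypervisor interleaves those streams onto a shared datastore, the array
# sees offsets that jump around almost randomly.

def vm_sequential_stream(base_offset, block_size, count):
    """One VM's sequential write offsets within its own virtual disk region."""
    return [base_offset + i * block_size for i in range(count)]

# Three VMs with different block sizes (e.g., log files vs. database pages).
streams = [
    vm_sequential_stream(base_offset=0,          block_size=4_096,  count=8),
    vm_sequential_stream(base_offset=10_000_000, block_size=8_192,  count=8),
    vm_sequential_stream(base_offset=50_000_000, block_size=65_536, count=8),
]

# Interleave the streams the way a hypervisor multiplexes guest I/O.
blended = [offset for trio in zip(*streams) for offset in trio]
random.shuffle(blended)  # scheduling jitter makes the mix even less ordered

print("Per-VM streams are sequential, but the array sees:")
print(blended[:12])
```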

MI: How does flash solve these storage performance problems?

Lowe: Flash is very well suited to random I/O patterns, and we see a lot of different ways that vendors are using it. We have all-flash arrays, which are super fast. There's also a hybrid approach with a significant flash cache in front of data that then gets destaged to spinning disk. The way those systems get better performance is by reordering the write operations so they're sequential, the way hard drives really want them.
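As a rough illustration of that reordering idea (a hypothetical sketch, not any vendor's actual algorithm), the snippet below buffers incoming random writes in a flash-style cache and then destages them to disk sorted by logical block address, so the spinning disks see largely sequential work. The HybridWriteCache class and its flush threshold are assumptions made up for this example.

```python
# Simplified sketch of write coalescing in a hypothetical hybrid array:
# random writes land in a flash-backed buffer, then get destaged to disk
# in LBA (logical block address) order so the spinning disks see mostly
# sequential work. Real arrays are far more sophisticated than this.

class HybridWriteCache:
    def __init__(self, flush_threshold=4):
        self.flush_threshold = flush_threshold
        self.buffer = {}  # lba -> data, absorbed by the flash tier

    def write(self, lba, data):
        """Acknowledge the write once it hits flash; coalesce overwrites."""
        self.buffer[lba] = data
        if len(self.buffer) >= self.flush_threshold:
            self.destage()

    def destage(self):
        """Flush buffered writes to disk in ascending LBA order."""
        for lba in sorted(self.buffer):
            print(f"disk write @ LBA {lba}: {self.buffer[lba]!r}")
        self.buffer.clear()

cache = HybridWriteCache()
for lba in (9040, 12, 7731, 305):   # random-looking guest writes
    cache.write(lba, f"block-{lba}")
```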

MI: Explain the appeal of virtual SANs built from commodity servers, storage and flash.

Lowe: The biggest example of this trend is hyper-converged infrastructure, which basically eliminates the SAN. The beauty of that for a lot of organizations is that the storage environment is often the most expensive piece of infrastructure they buy; companies spend tens or hundreds of thousands of dollars on storage.

These systems solve for performance with a hybrid or all-flash approach to storage, but it goes beyond that. Hyper-converged systems also simplify operations. If there's no more SAN, you don't need someone with a storage Ph.D. to manage storage.

MI: Is the model of separate servers and SANs outdated?

Lowe: It's not outdated; it depends on the organization. Some organizations take a hybrid approach to the data center, where they take a single application and put it on hyper-converged infrastructure. But the days of the monolithic SAN are not over. It's going to take some time for converged and hyper-converged offerings to become more robust and scalable than they are today, and for people to change their thinking about going back to the days of direct-attached storage.

This was last published in July 2015
