Published: 01 Feb 2013
Lately I’ve noticed that IT trends within an organization and IT trends within a data center are like classical mechanics and quantum mechanics, respectively. If it’s been a while since your last physics course, classical mechanics governs the motion of large objects. However, once an object gets small enough—down to the size of an atom—all the rules change. Those new rules are quantum mechanics.
Data centers are like that, too. Outside a data center, IT departments preach consolidation and centralization. They push data and processing back into the data center with virtualization and cloud computing. They even push the desktops into the data center with virtual desktop infrastructure.
But within a data center, the trend is the opposite. Applications and systems that scale up with a “Just throw hardware at it” approach are yielding to those that scale out. The result is a swarm of smaller machines working together instead of a monolith, or local disks orchestrated by software to deliver the services of a more traditional centralized array.
Decentralization trumps centralization
So the operative question is, why aren’t data centers following their own rules?
Frankly, there’s only one reason for this: cost. The cost of a traditional centralized disk array is enormous compared to the performance it delivers, especially when you factor in the complex way servers attach to it, and the way the storage systems and networks have to be managed and monitored. In contrast, local storage is easy. Absolutely everybody knows how it works, so it’s relatively immune to human error. It’s fast enough for most workloads, especially with many RAID controllers now offering native solid-state drive caching (to inexpensive commodity drives, no less). And every server monitoring tool on the planet can monitor local storage, so it’s one less thing you have to pay for, be trained on, implement and manage.
Systems that scale out are trendy for similar reasons. With a monolithic system, you need to size for peak workloads. The rest of the year, all that capacity is wasted, or at least hard to use. A scale-out system can be sized to exactly what is needed, with the capacity returned to a pool after peaks subside. This also appeals to organizations pushing workloads to the cloud. As hybrid cloud technologies mature, the idea of “cloud bursting,” or temporarily pushing workloads into a public cloud, gets more realistic every year. It also means being able to take full advantage of offerings like Amazon Elastic Compute Cloud’s Spot Instances, running workloads in EC2 whenever the price to do so is lower than your own data center’s operational costs.
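The Spot Instance economics described above boil down to a simple hourly cost comparison. Here is a minimal sketch in Python; the function name, the migration-overhead term, and every dollar figure are illustrative assumptions, not real prices or any AWS API:

```python
# Hypothetical decision rule for "cloud bursting" onto EC2 Spot Instances:
# run a workload in the cloud only when the spot price (plus any overhead
# for moving the workload) undercuts the estimated hourly cost of running
# it in your own data center. All numbers below are made up for illustration.

def place_workload(spot_price_per_hour: float,
                   internal_cost_per_hour: float,
                   migration_overhead_per_hour: float = 0.0) -> str:
    """Return where to run the workload based on effective hourly cost."""
    effective_cloud_cost = spot_price_per_hour + migration_overhead_per_hour
    if effective_cloud_cost < internal_cost_per_hour:
        return "ec2-spot"
    return "internal"

# Assumed $0.08/hr spot price vs. $0.12/hr internal cost, with $0.01/hr
# of overhead for shuttling data in and out of the cloud.
print(place_workload(0.08, 0.12, 0.01))  # -> ec2-spot
print(place_workload(0.15, 0.12, 0.01))  # -> internal
```

In practice the comparison is messier—spot prices fluctuate and instances can be reclaimed—but the core decision is exactly this kind of arithmetic.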
Will the trend of internal data center decentralization continue, despite the trend of organizational IT centralization? Most things in IT are cyclical, so it wouldn’t surprise me if, in 10 years, we started centralizing data centers again. Until then, I think we should get used to IT not practicing what it preaches—for the good of our bottom lines.
About the author:
Bob Plankers is a virtualization and cloud architect at a major Midwestern university. He is also the author of The Lone Sysadmin blog.