Develop a solid virtualization capacity planning strategy
A comprehensive collection of articles, videos and more, hand-picked by our editors
There have been many advances in data center technology in the last few years, all in an effort to handle the demands of today's business environment.
There’s more data to process and more storage needed to keep it all. When it’s time to figure out how to plan for your future needs, there are numerous variables to juggle and take into account.
I'm Tom Walat, site editor for SearchDataCenter.com, and with me today is Chris Crosby, CEO of Compass Data Centers, to talk about the capacity planning dilemma. Hi, Chris, and thanks for joining me today.
Chris Crosby: Thanks, Tom. Glad to be here.
You’ve compared data center capacity planning to a Gordian knot. Can you explain what you mean by that?
Chris Crosby: Well, the Gordian knot, historically, was the toughest thing to figure out, and there are a lot of parts to it. It's a very, very intricate problem with a lot of dependencies. So the Gordian knot, we thought, was probably the best analogy as it relates to IT capacity planning.
What are the issues that affect capacity planning?
Crosby: We really look at six different issues as you get into capacity planning. The simple ones are hardware, like your servers; your software applications; and the storage that's necessary. The ones that often come into play but aren't always as well recognized are how acquisitions and mergers impact things; new business applications that weren't thought of; and the constant dilemma that any enterprise faces of capital expenditure versus operating expense, and which way the company needs to go.
What effect do hardware refreshes have on planning?
Crosby: Hardware refreshes are one of the most overlooked components. Hardware refreshes typically happen every three to five years in terms of replacing servers and storage equipment, and oftentimes planning doesn't account for them. The assumption is that hardware usage and power usage just keep going up on a graph, but at a refresh, new hardware comes in that can do many more millions of processing cycles, or hold many more terabytes of storage, for the same amount of power. So it ends up actually shifting the curve back down again, and it really makes data centers last much longer than they're expected to at the time most capacity planning is done.
How would you characterize data center growth?
Crosby: I think it's unpredictably predictable. There are so many different factors that come in, I don't think you can put your finger on any one of them. But the constant has been consistent growth, at a compound annual growth rate (CAGR), if you will, of 8 to 12 percent over time. And while you have major disruptive pieces that come in and can, in theory, reduce demand on the data center, you have other things that come in that hadn't even been contemplated. A great example of that is the iPhone, which didn't exist so many years ago, and the effect those types of mobile devices have on a corporate environment - versus, let's say, virtualization, which in theory was going to reduce a lot of components. So there are so many factors that predicting in this space is probably where you need The Great Carnac more than anything else.
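To see what that 8 to 12 percent CAGR range implies, here is a minimal compound-growth sketch. The growth rates come from the interview; the starting load of 1,000 kW and the five-year horizon are hypothetical figures chosen purely for illustration.

```python
def project_capacity(start, rate, years):
    """Capacity after `years` of compound annual growth at `rate`."""
    return start * (1 + rate) ** years

start_kw = 1000  # hypothetical current IT load in kW (not from the interview)
for rate in (0.08, 0.12):
    in_five = project_capacity(start_kw, rate, 5)
    print(f"{rate:.0%} CAGR -> {in_five:,.0f} kW after 5 years")
```

Even at the low end of the range, demand grows by roughly half in five years, which is why a one-time snapshot makes a poor planning baseline.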
You’ve talked about chunky planning. Can you explain what that is and what are the benefits of chunky planning?
Crosby: Sure. Going back to the Gordian knot analogy, Alexander came in and, some historians say, cheated by just using his sword to cut the knot. There's an element of that here as well. I think we get so wrapped around the axle as IT professionals, trying to come up with all those different factors and trying to look into the future. Really, the way that we buy IT is pretty chunky: we need X amount more servers, X amount more storage capacity, X amount more network as we continue to grow the applications inside our businesses. And yet we traditionally have not looked at the data center the same way. When you can make the data center match in chunky bits, it's the most logical scenario. On the IT hardware side, you wouldn't add 38 megabytes to your storage system; you'd work in, say, terabyte chunks. We need to have the same mentality as we look at data centers and add capacity in those chunky sizes.
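The chunky-sizing idea above amounts to rounding forecast demand up to the next whole increment rather than building to the exact number. A minimal sketch, assuming a hypothetical 1,200 kW data center module size and made-up demand forecasts (none of these figures are from the interview):

```python
import math

def chunks_needed(forecast_kw, chunk_kw):
    """Number of whole capacity chunks needed to cover a forecast load."""
    return math.ceil(forecast_kw / chunk_kw)

chunk_kw = 1200  # hypothetical module ("chunk") size in kW
for demand in (900, 2500, 3600):
    n = chunks_needed(demand, chunk_kw)
    print(f"{demand} kW forecast -> {n} chunk(s) = {n * chunk_kw} kW built")
```

The design choice mirrors the storage analogy: you accept some headroom in each chunk in exchange for a far simpler planning problem.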
Are there any tools that are available to people to help them with this planning process? Are there vendors that are more prominent on the scene for helping people?
Crosby: You know, on the planning process side, I think it's a difficult scenario, but the best position you can be in is to have historical data. That applies to those folks that have been measuring, at least on the power side, hopefully for the past several years - it's been a very, very big issue for people that track PUE and things along those lines. When you have several years' worth of data and one of these events occurs - a merger, an acquisition, a new application coming online, or a hardware refresh - that data can really help you get a lens into your organization. So I really think that some of the DCIM tools in place at legacy facilities, even if they're very simple, can really help you get a view of what the future may hold.