New data center facilities are incredibly expensive to design, build, manage and maintain, but a new build isn't necessarily the answer in every circumstance. While some organizations can certainly justify the investment in new facilities, there are many others that want options and alternatives that can give them the facilities they need to run their business without breaking the bank or taking years to deploy. Alternatives to traditional data center construction will be an important theme at the upcoming Uptime Institute Symposium slated to run May 9-12 in Santa Clara, Calif.
Steve Bigelow, senior technology editor, and Matt Stansberry, director of content and publications for the Uptime Institute, discuss the range and implications of the potential data center construction alternatives that are out there.
Bigelow: New data center construction is incredibly expensive, but what alternatives to a formal build are available when a business needs more computing resources than it has available?
Stansberry: One of the biggest trends in data center construction is the phased, modular data center build-out. Companies used to build out a facility with all of the capacity they thought they would need for the lifecycle of the building, and this led to a lot of wasted capital and wasted energy. You've got a building specced out to run full of servers, but you ramp up slowly and you're wasting money. Now, many companies are constructing the building envelope and fitting out the mechanical components in small increments. This offers serious advantages: future hardware requirements don't outpace your mechanical infrastructure, capital expenses are lower, and the facility is more energy efficient. It also allows you to build out a facility with multiple tier levels. For example, if you have an application with high-availability requirements, you can build out a module to be Tier III or Tier IV, and apps with lower availability requirements can go in lower-Tier modules that cost a lot less than building out the whole facility at the higher Tier level. This is one of the opening sessions at the symposium, and I think it will be a very popular topic.
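The capital argument for phasing can be sketched with some simple arithmetic. All figures below are hypothetical, invented purely for illustration; real per-megawatt fit-out costs vary widely by site and tier level.

```python
# Illustrative sketch (hypothetical numbers): compare the capital committed
# on day one by a monolithic build-out against a phased build that fits out
# mechanical/electrical capacity only as IT load actually grows.

def upfront_capex(total_mw: float, cost_per_mw: float) -> float:
    """Monolithic build: all lifecycle capacity is paid for up front."""
    return total_mw * cost_per_mw

def phased_capex_year_one(year_one_mw: float, cost_per_mw: float) -> float:
    """Phased build: only the first increment is paid for up front."""
    return year_one_mw * cost_per_mw

COST_PER_MW = 10_000_000   # hypothetical $10M per MW of fitted-out capacity
TOTAL_MW = 8               # capacity expected over the facility lifecycle
YEAR_ONE_MW = 2            # capacity actually needed at opening

monolithic = upfront_capex(TOTAL_MW, COST_PER_MW)
phased = phased_capex_year_one(YEAR_ONE_MW, COST_PER_MW)
deferred = monolithic - phased
print(f"Capital deferred by phasing: ${deferred:,.0f}")
```

Under these assumed numbers, phasing defers $60M of capital until the load actually materializes, which is the "wasted capital" Stansberry describes avoiding.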
Bigelow: How do you see private and public cloud deployments changing over the next few years?
Stansberry: There will be dramatic growth in cloud services. According to our latest survey stats, about 75% of our data center owners and operators said they are using or will use public or private clouds in the coming year. You have to remember, the Uptime Institute's typical member base is fairly conservative: high-availability, traditional Fortune 500 enterprises. So that number was shocking, to me at least. And scalability was the number one reason people were turning to the cloud. Rapidly fluctuating IT demand is hard for companies to deal with, and cloud computing may be an option.
We're going to have a host of sessions on cloud computing at the symposium, from how-to sessions on implementing cloud into your capacity planning, to looking at cloud computing to reduce your carbon footprint, which is especially important in places like Europe, where carbon cap-and-trade legislation is already in place. We'll have some primer sessions on how data center operators should prepare their organizations to engage public clouds. I'd been a cloud skeptic, at least as far as its importance to enterprise data center managers was concerned, but now I've drunk the Kool-Aid. It's going to be a disruptive technology for our audience.
Bigelow: When do portable- or container-based data center modules make sense?
Stansberry: Companies that are considering containers need to weigh all the business requirements and costs associated with containers versus building a brick-and-mortar data center. But I can say that there's a huge difference between the original generation-one data center containers launched four to five years ago and the modern, purpose-built modules being built today.
Companies like SGI/Rackable, I/O data centers, HP, Dell, etc., are all coming out with new designs, literally stamping out production-scale data centers with great features for free cooling and maximum efficiency. And the new generations are much smaller; a major complaint about generation-one containers was that their deployment increments were too big. At the symposium, our container panel will be one of the best discussions on this topic in the industry. Tier1Research's Jason Schafer will present his latest research on containerized data centers, followed by a panel including Dean Nelson, who recently completed a major containerized data center RFP process for eBay, and Patrick Yantz, who deployed Microsoft's containerized data center design in Chicago and now designs next-generation modular data centers for SGI.
Bigelow: Are there any other build alternatives on the horizon that we should be paying attention to?
Stansberry: It's about consolidation and capacity planning. Data center managers don't make enough time for strategic planning. As budget cuts reduce staffing levels and operations teams are asked to do more work with fewer resources, managers get locked into the day-to-day firefighting; they never get out of the reactionary mode to plan ahead. The consequences can be dire. I wouldn't want to be the manager who has to go to the executive team to explain why a data center ran out of capacity sooner than expected.
So if you're running out of capacity, what are you going to do about it, other than plunk down a nine-figure capital expense? Are you considering moving compute loads to the cloud, increasing virtualization investments, evaluating colocation options? These aren't the kinds of projects you can handle if you don't make time for formal strategic planning across all of the silos in the organization, from server and storage management to the facilities team. Uptime Institute's digital infrastructure team will be leading intensive workshops at the symposium on how to bring the teams together to do this kind of planning.
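The "ran out of capacity sooner than expected" scenario is at heart a compounding-growth calculation. As a minimal sketch, with entirely hypothetical load and growth figures, this is the kind of runway estimate a planning exercise starts from:

```python
import math

def months_until_capacity(current_kw: float, capacity_kw: float,
                          monthly_growth: float) -> int:
    """Months until IT load, compounding at monthly_growth, reaches
    the facility's fitted capacity. Figures are illustrative only."""
    if current_kw >= capacity_kw:
        return 0  # already out of capacity
    return math.ceil(
        math.log(capacity_kw / current_kw) / math.log(1 + monthly_growth)
    )

# Hypothetical facility: 600 kW drawn today, 1,000 kW fitted capacity,
# IT load growing 3% per month.
runway = months_until_capacity(600, 1000, 0.03)
print(f"Capacity exhausted in roughly {runway} months")  # roughly 18 months
```

A back-of-the-envelope number like this is what makes the conversation with the executive team concrete: it tells you whether cloud bursting, further virtualization, or colocation has to be in place in months rather than years.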