Data center design expert Steven Harris of the Skokie, Ill.-based consulting firm Forsythe Technology Inc. says the time is ripe for data center turnover, as many companies hit a tipping point with aging facilities. Harris says rip and replace is less risky than renovating operational data centers.
Why are so many data centers running into design problems today?
Steven Harris: If you look at data centers even 10 years ago, they were mainframe- and midrange-oriented. The design was required to power, cool and house a mainframe. I've seen a lot of data centers where the transformation has gone from mainframe to server fairly quickly, and they end up with very large data centers from a floor-space standpoint and minimal power and cooling.
Most organizations have a hard time seeing more than three years out. The crystal ball just doesn't work that well. Servers -- especially in the last five years -- have significantly densified. You can get a lot more processing capability with less floor or rack space, but on the other side of that equation, the power and cooling required have significantly increased. So, if that proliferates within a typical data center environment, you can quickly see how you run into power and cooling issues.
Open systems design has been going on for years, so why is now a tipping point?
Harris: Things seem to run in cycles. If you look at the late '90s, everyone was preparing for Y2K. That was everybody's focus. Following the year 2000, the economy took a dive, and budgets dried up for capital improvements and data center expansions. For the next couple of years, buying servers here and there was all most companies could do.
Now the economy has improved, we're midway through 2006, and a lot has changed in seven years. Companies are starting to realize that their data centers have some age to them. The last thing they did may have been in preparation for Y2K, and even that may have been minimal on the facility side because they were spending so much money on the IT side. All of a sudden I'm 10 years out of date, I don't have enough power or enough cooling, and the demands my internal and external customers are placing on me to accommodate new IT equipment are staggering. I need to do something to my data center.
Does it make more sense to retrofit an old data center or build a new one?
Harris: Well, it depends. There are still some data centers where the transformation from mainframe to server has left them with a considerably large footprint of floor space, and there's an ability to potentially segregate the utilized portion of the data center from the non-utilized portion and do some kind of a retrofit or an upgrade on the side that's not up and operational.
When you're dealing with a data center that's fairly full from an IT processing standpoint, it becomes very difficult -- very risky, frankly -- to do an upgrade. It's almost akin to changing an airplane's engines while it's in flight. The risk, the cost and the project timeline go up exponentially when you're dealing with an environment where you really can't tolerate an outage. Leaving your old environment in place and doing an IT relocation is significantly less risky.
It's less risky to build out a new data center?
Harris: It's less risky to build new. In today's world, we're seeing more and more clients with a requirement to be 24 by 7 by 365, with no ability for downtime. Even scheduled downtime is becoming a thing of the past. When you're dealing with that kind of environment and you're making the decision to upgrade or expand, you're almost looking at a requirement to do something someplace else.
In a lot of cases, the existing data center could very easily, from an infrastructure standpoint, accommodate the upgrade or expansion. The question becomes: How do you do all that without causing an interruption, planned or unplanned, to your processing environment?
How do you figure out when you need a new data center?
Harris: In a lot of cases, it's having a data center vulnerability assessment, where a company such as Forsythe comes in, takes a look at the infrastructure and its capabilities, and comes back with a report that says you're falling short in the following areas and/or you've got single points of failure in the design of your data center that potentially expose you to unplanned outages. [Also] if you're running out of floor space -- say you've got a 5,000-square-foot data center and you're 90% occupied.
So, how do you avoid being blindsided and having to do this again in five years?
Harris: From a design standpoint, if you're looking at a new data center, make sure that you look at historical averages of year-over-year growth, especially over the past three years. You get an idea of how much additional floor space [you need] for every server that comes in, and each server relates back to floor space, power and cooling. I can project with relative certainty that my data center will require this much more floor space in year one, year three, year seven and year 10. You're looking basically at organic growth. But no one can really predict extraordinary events like mergers and consolidations.
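The projection Harris describes is simple compound-growth arithmetic: take the average year-over-year growth rate from the past three years and carry today's figures forward. A minimal sketch is below; the starting footprint and 12% growth rate are illustrative assumptions, not figures from the interview.

```python
# Illustrative capacity projection from a historical growth rate.
# The starting figure and growth rate are hypothetical examples.

def project_capacity(current: float, annual_growth: float, years: int) -> float:
    """Compound a capacity figure forward, assuming steady organic growth."""
    return current * (1 + annual_growth) ** years

floor_space_sqft = 5000  # current raised-floor footprint (assumed)
growth_rate = 0.12       # avg. year-over-year growth, past 3 years (assumed)

# The same arithmetic applies to power (kW) and cooling (tons), since
# each incoming server relates back to floor space, power and cooling.
for year in (1, 3, 7, 10):
    projected = project_capacity(floor_space_sqft, growth_rate, year)
    print(f"Year {year:2d}: ~{projected:,.0f} sq ft")
```

As Harris notes, this only models organic growth; a merger or consolidation can invalidate the projection overnight.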