What has changed the most since you started managing large-scale data centers?
Jeff Lowenberg: The mechanics of what's inside a data center have changed a little, but nothing has made a huge difference. It's really the power requirements of the servers that have been the driving force.
A lot of data centers built in the mid-to-late 1990s are woefully underpowered -- we have some of those data centers. They're certainly outdated by wattage per square foot, but they're still usable; they have all the redundancies built into them. And the leases can go on for years.
How do you get around that issue? Do you retrofit the buildings?
Lowenberg: The first building we had ran at 50 watts per square foot -- a pre-existing facility from a company that went under in the dot-com bust. We looked at retrofitting, and it was cheaper to build new. If you want to go in and upgrade the power and air conditioning, you might as well go to a new facility so you don't affect existing customers.
If we've got to move customers out of a data center and into a new one, it's something I certainly don't look forward to and don't want to do again, no matter how good the planning is.
We're opening a new facility on Thursday -- 12,500 square feet of raised floor at 150 watts per square foot. The data center will house between 8,000 and 9,000 servers. We have another 12,500 square feet of raised floor scheduled to open next month.
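Those figures imply a rough power budget per server. A quick back-of-envelope sketch -- the 8,500-server midpoint is my assumption for illustration, not a number from the interview:

```python
# Back-of-envelope check of the quoted capacity figures:
# 12,500 sq ft of raised floor at 150 watts per square foot,
# housing roughly 8,000-9,000 servers.

FLOOR_SQFT = 12_500
WATTS_PER_SQFT = 150
SERVERS = 8_500  # assumed midpoint of the 8,000-9,000 range

total_watts = FLOOR_SQFT * WATTS_PER_SQFT
watts_per_server = total_watts / SERVERS

print(f"Total design load: {total_watts / 1_000_000:.3f} MW")   # 1.875 MW
print(f"Power budget per server: {watts_per_server:.0f} W")     # ~221 W
```

At roughly 220 watts per server, the numbers are internally consistent for densely packed single- and dual-socket rack servers of that period.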
Can you see going denser than 150 watts per square foot?
Lowenberg: We've found that with the way we lay out the data center, you could fit more in if you tried. But if you go above 150 watts per square foot, you need to move to supplemental cooling, and that gets more expensive. This is the model that has worked out best for us. I could add a bunch more cooling and more power, but it would cost me more per slot than building another data center. And it makes the data center a lot more complex, which makes our lives more difficult.
Are you looking outside of Texas for new data center growth?
Lowenberg: We're currently in a site selection process. In Texas, power is fairly expensive, and there's not much you can do to lower your costs. A number of energy markets cost substantially less than Texas. For example, Microsoft and Google are building out in the Northwest for really cheap hydroelectric power. We're looking into the Midwest.
Who are you seeing doing the most to combat data center power issues?
Lowenberg: The main group that we subscribe to is The Uptime Institute -- the organization that defines the Data Center Tier Standards. The Uptime Institute provides a lot of information, and it uses our data centers for benchmarking research. Thanks to The Uptime Institute, we've been making sure there is no cold air going anywhere but the cold aisles. I've got guys going out with caulking guns sealing up every crack in the floor, sealing where the concrete wall and sheetrock meet. Every hole is covered with a grommet or sealed. When we got through with that, the power I was using to cool the data center went down 10%. That's $10,000 per month in electricity.
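Working backward from those two numbers gives a sense of the total cooling spend. A minimal sketch of the implied arithmetic -- the ~$100,000/month figure is an inference from the quoted savings, not a number stated in the interview:

```python
# If sealing the floor cut cooling power 10% and that is worth
# $10,000/month, the cooling electricity bill before the work
# was roughly savings / fraction. Illustrative arithmetic only.

savings_per_month = 10_000  # dollars, from the interview
savings_fraction = 0.10     # 10% reduction, from the interview

bill_before = savings_per_month / savings_fraction
bill_after = bill_before - savings_per_month

print(f"Implied cooling bill before sealing: ${bill_before:,.0f}/month")  # $100,000
print(f"Implied cooling bill after sealing:  ${bill_after:,.0f}/month")   # $90,000
```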
Does The Planet use the Tier Rating system?
Lowenberg: We take their standards and build to them. [The Uptime Institute] points out any shortcomings and makes suggestions to make things cheaper or easier. We follow their best practices, but we are not certified by them. It is a very time-consuming process.
All of our data centers are built to a Tier II standard. We've got a lot of redundancy in our power, generators and cooling in all of our facilities. At our Houston data centers, which we've had since late 2000, there has not been any facility-wide downtime.
How do you achieve that goal?
Lowenberg: We overdo our preventive maintenance, using higher standards than the manufacturers' recommendations. Most UPS manufacturers tell you to load-test batteries once a year; we do it twice. Most electricians tell you to do an infrared scan to make sure breaker points don't overheat once a year; we do it twice.
Diesel generator fuel -- you can keep it good for two and a half years with polishers. But people have no idea how important it is to keep those tanks topped off. The more room at the top, the more condensation happens. The primary point of the polishing is to pull water out of the diesel fuel. We have contracts with companies to top our tanks off every other month and make recommendations on additives.
Industry forums and magazines have stories about companies whose generators and UPSes can't start -- the batteries are dead or the diesel has been sitting forever. We've got customers counting on us to keep their businesses up.
Have you looked into data center liquid cooling?
Lowenberg: Liquid cooling systems primarily look at cooling individual cabinets. It's very expensive and offers little redundancy. If the system goes down, you have to shut down the cabinet. With air, if I lose a CRAC unit, the air temperature goes up a few degrees until it's fixed.
God forbid if it springs a leak.
You see that mostly in smaller IT operations -- five, 10 or 20 cabinets. Doing that on a large scale doesn't make economic sense.
Editor's Note: The issue of data center efficiency is at the top of the docket for the 2007 Uptime Institute Symposium, coming up March 4-7 in Orlando. SearchDataCenter.com will be participating as a co-moderator at the symposium.
Let us know what you think about the article; e-mail: Matt Stansberry, Site Editor.