Disaster recovery: Keep the power on in a regional disaster

Baltimore Technology Park (BTP) president Jim Weller has launched over 50 centers nationwide. His most recent venture, BTP, is a 30,000-square-foot hosting facility on the outskirts of Baltimore. In this Q&A, Weller talks about the steps businesses have to take to stay online in a regional disaster.

During your data center career, have you had to enact a disaster recovery plan?

Weller: I was involved in operating some data centers in New York during 9/11 and during the blackout, and we were able to keep our data centers alive through both. In both cases we had very few issues beyond the disasters themselves. The facilities ran as designed. The backup procedures worked as planned. So we had very little concern during those events.

Staying alive during a disaster means making sure that your procedures are tight before the disaster. Things like generator testing, load testing, breaker design and fuel supply need to be taken care of long before the disaster hits, so that while one is happening you are not spending time doing things you normally wouldn't do.
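That kind of pre-disaster readiness can be tracked programmatically. The minimal Python sketch below is illustrative only; the checklist items echo the procedures Weller names, but the dates, the 30-day window and the item names are assumptions, not BTP's actual practice.

```python
from datetime import date, timedelta

# Hypothetical pre-disaster checklist: each item must have been verified
# within the allowed window for the site to be considered disaster-ready.
MAX_AGE = timedelta(days=30)  # assumed verification window

checklist = {
    "generator_test":     date(2007, 2, 20),
    "ups_load_test":      date(2007, 2, 12),
    "breaker_inspection": date(2007, 1, 30),
    "fuel_level_check":   date(2007, 2, 25),
}

def site_is_ready(items, today):
    """Return (ready, stale_items) based on last-verified dates."""
    stale = [name for name, last in items.items() if today - last > MAX_AGE]
    return (len(stale) == 0, stale)

ready, stale = site_is_ready(checklist, today=date(2007, 3, 1))
print("Disaster-ready" if ready else f"Stale items: {stale}")
```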

Can you give an example of a backup component that is overlooked?

Weller: Fuel supply is one that stands out. People like to tout having fuel supply contracts in place, but I can tell you from experience that when you have a city-wide outage, getting your fuel delivered per that contract is not always an easy task. In our case, we had sufficient fuel on hand to make it through the high-impact part of the disaster, to the point where a delivery could get through. Manhattan, for example, was not getting fuel deliveries for probably close to five days after 9/11. So having enough on hand was critical in that type of event.

How were you able to determine what constituted an adequate supply?

Weller: More is always better. From my experience, we like to have a minimum of 48 hours' worth of fuel on hand. That just comes from experience. According to most studies, you're covered in a blackout with about a two-day supply. 9/11 was an exception; that was a pretty long interval because of security issues. We did okay with that one.

I don't think there are any official parameters out there, but more is always better. We always try to push it up as high as we can [store] based on fire marshal limits and the physical capabilities of the building.
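To make the 48-hour rule of thumb concrete, the short sketch below converts a generator's fuel burn into on-hand runtime and checks it against the target. The burn rate and tank capacity are illustrative assumptions, not figures from BTP.

```python
# Fuel-runway sketch: how long on-site fuel lasts at full generator load.
BURN_RATE_GPH = 70        # gallons per hour at full load (assumed)
TANK_CAPACITY_GAL = 4000  # usable on-site storage (assumed)
TARGET_HOURS = 48         # the minimum runway Weller describes

runtime_hours = TANK_CAPACITY_GAL / BURN_RATE_GPH
print(f"On-hand fuel covers {runtime_hours:.1f} hours at full load")

if runtime_hours < TARGET_HOURS:
    shortfall_gal = (TARGET_HOURS - runtime_hours) * BURN_RATE_GPH
    print(f"Short of the {TARGET_HOURS}-hour target by {shortfall_gal:.0f} gallons")
```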

Do you have an equipment testing cycle?

Weller: Yes. The first part comes when you put a new center online. A pretty rigorous conditioning practice needs to be followed: you're testing all of your components at full load, because you really need to make sure that everything is working properly.

After the conditioning stage, you go into a very regular schedule of testing all your components, and it can be as frequent as weekly on a lot of the devices. A generator run is pretty simple to do; run that on a weekly basis. We have it done automatically here. We also test the UPS systems, the transfer switches and just about every other component on the site on a regularly scheduled basis.
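One way to keep such a schedule honest is to record each component's test interval and flag anything overdue. In the sketch below, the weekly generator run comes from the interview; the other intervals and all dates are assumptions for illustration.

```python
from datetime import date, timedelta

# Component -> (test interval, date of last test). Intervals other than the
# weekly generator run are illustrative assumptions.
schedule = {
    "generator_run":    (timedelta(weeks=1),  date(2007, 2, 26)),
    "ups_battery_test": (timedelta(weeks=4),  date(2007, 2, 5)),
    "transfer_switch":  (timedelta(weeks=12), date(2006, 12, 1)),
}

def overdue(schedule, today):
    """Yield (component, due_date) for anything past its test interval."""
    for name, (interval, last_test) in schedule.items():
        due = last_test + interval
        if today > due:
            yield name, due

for name, due in overdue(schedule, today=date(2007, 3, 1)):
    print(f"{name} is overdue (was due {due})")
```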

One thing that might get overlooked is the air conditioning. We're pretty regular about that and about making sure that we have redundancy. For example, if all of your power supplies are tested and run flawlessly, but you lose an air conditioning unit and don't have sufficient backup, spare parts or proper servicing capabilities, your center can quickly reach a temperature where you're losing servers and systems need to shut down. It doesn't sound like a critical component, but it is. Just about every piece of data center infrastructure is equally important.
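The failure mode Weller describes, losing a cooling unit and letting the room heat past what servers tolerate, reduces to a simple threshold check. The sketch below uses assumed set points and unit counts; they are not BTP's actual figures.

```python
# Cooling-alert sketch with assumed thresholds.
WARN_TEMP_F = 80      # page facilities staff (assumed set point)
SHUTDOWN_TEMP_F = 95  # begin orderly server shutdown (assumed set point)

def check_room(temp_f, ac_units_online, ac_units_required):
    """Return an action based on room temperature and cooling redundancy."""
    if temp_f >= SHUTDOWN_TEMP_F:
        return "initiate orderly shutdown"
    if temp_f >= WARN_TEMP_F or ac_units_online < ac_units_required:
        return "alert: cooling redundancy lost, dispatch service"
    return "ok"

# One unit down and the room warming: still below shutdown, but alert now.
print(check_room(temp_f=82, ac_units_online=3, ac_units_required=4))
```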

How has the disaster recovery process changed?

Weller: Five years ago, people talked a lot about disaster recovery, but I think you're finally seeing people taking it seriously. Not only are these items budgeted for, but people are really enacting them. Businesses are highly dependent on their data today, and not just e-businesses. It used to be that you could go without financial data for some time. Today, people don't believe that. People are planning and building their disaster recovery strategies very diligently.

Do your customers play a role in DR plans at your hosting facility?

Weller: They do to some degree, but our customers typically have their own disaster recovery plans in place. There is a notification process and a remedy process for the site itself, but individual customers have their own requirements. Many have duplicate sites in place, so the minute we notify a customer of an incident, regardless of its level, they can choose on their own to implement whatever level of DR they have built into their business model.

This was first published in March 2007
