This is the second installment in a series on the price of power in the data center.
The temperature in your data center is rising. You buy more air conditioning (AC) units to handle the cooling load. Then you realize you can't power those units. It's time to buy another power distribution unit (PDU). You have no way of accurately measuring how long it will last you, so you go with the biggest option you can handle. Next thing you know, you're facing power bills that make your chief financial officer choke.
Managing data center infrastructure -- HVAC units, power distribution, UPS systems -- has traditionally fallen under the responsibility of facility management. But for several reasons, IT has gotten more involved in procuring these resources. As a result, experts said IT is overbuying infrastructure equipment, which adds to problems IT pros are trying desperately to solve.
Jeanne Harden, project manager of computer operations for the Hillsborough County Clerk of Circuit Court, in Tampa, Fla., and president of AFCOM's Tampa chapter, knows the problem all too well.
Harden has an 80 kW UPS in her facility with another one on its way. She has two 10-ton AC units and she's putting in a third -- for which she has to buy another generator.
According to Harden, these are facility department issues, but it's her department that has to worry about the servers. She needed to be proactive.
"I'm the one seeking out vendors, researching options and paying for it out of my budget," Harden said.
Harden said the move from mainframes and midrange servers to racks of x86 Dell machines is what has made cooling more difficult, and she has had to scramble to reorganize her data center around the problems.
"In most cases, it's probably true that facilities departments don't respond quickly enough to the needs of IT," said Rick Correia, director of facilities, APT Management, Woburn, Mass., and an HVAC specialist. But, he added, it's not that they're clueless or insensitive, it's just that the facility manager has to consider the entire complex when making decisions; it can't always be about the data center.
"[IT] is just a small segment of the entire picture -- and facilities can't sacrifice the entire plant for one room. That's why a data center should have its own independent system," Correia said. "The needs of a data center are unique and require unique solutions. An independent system can be more easily managed by the facilities department."
It's a scenario International Facility Management Association (IFMA) board member Paul Doherty has seen unfold over and over again in his organization. Doherty is president of IFMA's information technology council and a managing director at General Land Corp., a San Diego, Calif.-based real estate services firm.
According to Doherty, IT changes are often reactive rather than proactive. "It's not a knock on IT. Markets change," Doherty said. "People look at business continuity issues in the context of a natural disaster, but people aren't planning for disaster recovery from market conditions. What about explosive growth or contraction?"
Other situations that demand a fast facility turnaround include acquisitions and sudden compliance requirements. IT wants new infrastructure in two weeks; facilities says six months.
"Business moves fast. Trends in IT, such as server consolidation or VoIP [Voice over Internet Protocol] are driving challenges for facility management at the power layer," said Russell Senesac, product manager at West Kingston, R.I.-based American Power Conversion Corp. "If IBM tells your IT department it can save 50% on your IT budget if you buy blade servers, you buy them. But facilities won't be able to upgrade your data center for months. Facilities has become a bottleneck for business change, so IT is getting more proactive.
"Who's got the dollars?" Senesac asked. "Facility managers are [traditionally] buying generators and HVAC, but if IT wants to spend $2 million on a blade server system, they're going to roll up the cost of HVAC and generators into that budget."
Bigger isn't better
But when data center environmental issues force IT departments to take matters into their own hands, the result is often equipment overload.
"At the power and cooling infrastructure level, the biggest inefficiency we see is overbuying," Senesac said. "Let's say you need a 100 kW UPS to support your needs today. But instead of [buying a 100 kW UPS], you buy a megawatt because you're afraid you're going to need 500 kW in a few years."
According to Senesac, UPS systems and AC units need to be run close to capacity to get the most efficiency.
"Companies don't want to run a UPS at the edge because they think that if someone plugs in a vacuum cleaner, the whole place will crash," Senesac said. "So most run at 80% capacity. But at 65% and below the efficiency really drops off."
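The arithmetic behind Senesac's point is straightforward. The sketch below (a hypothetical illustration, not vendor data; the 100 kW load, the 1 MW oversized unit and the 65% efficiency knee come from his comments, while the 125 kW "right-sized" capacity is an assumed figure for comparison) shows how the same IT load lands in very different parts of a UPS's operating range depending on how the unit was sized:

```python
# Illustrative UPS sizing sketch. The load and capacity figures are
# hypothetical; the 65% threshold reflects Senesac's comment that
# efficiency "really drops off" at 65% utilization and below.

def utilization(load_kw: float, capacity_kw: float) -> float:
    """Return the fraction of UPS capacity actually in use."""
    return load_kw / capacity_kw

LOW_EFFICIENCY_THRESHOLD = 0.65

load = 100.0  # kW of IT load today

right_sized = utilization(load, 125.0)    # 100 kW load on a 125 kW UPS
oversized = utilization(load, 1000.0)     # same load on a 1 MW UPS

print(f"Right-sized UPS runs at {right_sized:.0%} of capacity")  # 80%
print(f"Oversized UPS runs at {oversized:.0%} of capacity")      # 10%
print("Oversized unit sits in the low-efficiency zone:",
      oversized < LOW_EFFICIENCY_THRESHOLD)
```

The right-sized unit lands at the 80% utilization most shops target; the megawatt unit idles at 10%, far below the point where efficiency falls off.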
Senesac said the same is true for cooling, one of the biggest energy hogs in the data center. An AC unit runs most efficiently when it is fed the hottest return air, but when people see hot spots, they often assume more cooling will solve the problem. Adding more cooling units and running them all at lower capacity is a completely inefficient way to deal with it.

When hot spots develop, many IT departments buy more cooling capacity. But it's not more cooling they need -- they need to route the cooling they already have properly.
"People turn down their CRAC units to 45 degrees, but that's not the issue," Senesac said. "You can make it as cold as you want, but if it's not getting to the right place, you're just wasting power. They think that throwing more dollars at a problem will make it go away."
But Correia disagreed with the suggestion that data center managers are carelessly wasting energy. He doesn't think IT should be viewed as shortsighted for bringing in air conditioners to cool things down. "There's no question that if they're just throwing air conditioners into the room, they're not running at peak efficiency. But if the space isn't designed to handle the heat load, then they're doing the next logical thing -- for the short term, at least."
Work it out
Vern Brownell, chief technology officer of Marlborough, Mass.-based blade manufacturer Egenera, has dealt with the issue from both sides. He spent 11 years running the IT department at Wall Street giant Goldman Sachs. He said when IT people order more and more equipment, facilities managers are left holding the bag.
"Based on my experience at Goldman, unfortunately, facilities people are always finding out [about technology purchases] afterward," Brownell said. "A lot of IT folks are out there purchasing equipment, changing architectures and doing what works for IT, but not necessarily consulting with facilities people who understand the downstream effects on heat and power over a long-term basis."
The moral of this story is to tighten the relationship between IT and facility management.
Correia added that consultants and vendors are always pitching optimum solutions, but said many of them need a reality check. "Any professional, whether in IT or facilities, strives to be as efficient and energy conscious as possible, but they also have a job to get done, and sometimes that means doing things that are not perfect," he said. "It's easy from the outside looking in to say what needs to be done. It's another thing to actually work your way through all the red tape to get it done -- and in the meantime, your data center is as hot as hell. That doesn't do anybody any good."
Let us know what you think about the story; e-mail: Matt Stansberry, News Editor