Air conditioning (AC) is the biggest power hog in the data center -- and you don't need an engineering degree to figure that out. In fact, it's impossible to talk about power consumption in the data center without talking about cooling. It seems obvious, then, that if you want to get a handle on the amount of electricity you use, you start with the AC.
For a variety of reasons, data centers keep throwing AC at the problem -- and more often than not, it simply doesn't work. Sometimes it's unavoidable (poorly designed rooms make efficiency nearly impossible), but according to experts there are ways to keep your data center reasonably cool and keep your energy costs down.
The key, according to experts, is to route the cool air you have to the right place.
The problem is, directing cold air is like trying to herd cats. Air is unpredictable. Your cooling unit is sucking in air, cooling it and then throwing it up through a perforated floor. But you have little control over where that air is actually ending up.
In a traditional data center, cold air runs parallel to the floor under raised tiles. You then direct the air to turn 90 degrees and come up through perforations to the rack. From there, it's expected to turn 90 degrees again and get sucked into the servers.
According to Russell Senesac, product manager at West Kingston, R.I.-based American Power Conversion Corp., that isn't practical. "What happens when you've got the bottom servers sucking in all of the cold air? You're going to get hot spots."
Experts said raised floors may in fact become obsolete for that very reason. According to Senesac, raised floors go back to the days of liquid-cooled mainframes. But now, there are just too many obstructions under the floor, making it inefficient to direct air.
"Floor manufacturers say you should get 700 CFM [cubic feet per minute] coming through the perforated tiles," said Bob Doherty, data center manager at Beth Israel Deaconess Medical Center in Boston and a board member with the AFCOM's Data Center Institute. "But in reality, I've never seen a data center with a raised floor get anywhere near that."
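To see why that 700 CFM figure matters, the standard HVAC sensible-heat rule of thumb (Q ≈ 1.08 × CFM × ΔT, in BTU/hr, assuming sea-level air density) converts tile airflow into cooling capacity. This is a sketch, not from the article; the 300 CFM "typical" tile and the 20°F server delta-T below are hypothetical illustrations:

```python
# Rough sensible-cooling math for a perforated floor tile.
# The 1.08 factor is a standard HVAC rule of thumb for air at
# sea-level density; it is an assumption, not a figure from the article.

BTU_PER_HR_PER_KW = 3412  # 1 kW of heat is about 3,412 BTU/hr

def tile_cooling_kw(cfm: float, delta_t_f: float) -> float:
    """Sensible heat the tile's airflow can absorb, in kW.

    Q (BTU/hr) = 1.08 * CFM * delta-T (deg F)
    """
    return 1.08 * cfm * delta_t_f / BTU_PER_HR_PER_KW

# The rated 700 CFM, with a hypothetical 20 degF rise through the servers:
rated = tile_cooling_kw(700, 20)
# A hypothetical 300 CFM tile -- closer to what many floors deliver -- same delta-T:
typical = tile_cooling_kw(300, 20)

print(f"700 CFM tile: {rated:.1f} kW")    # ~4.4 kW of rack load
print(f"300 CFM tile: {typical:.1f} kW")  # ~1.9 kW of rack load
```

Under those assumptions, a tile that falls well short of 700 CFM can support only a couple of kilowatts per rack -- which is why under-delivering floors produce the hot spots the experts describe.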
According to data center design expert Robert E. McFarlane, president of the Interport Division of New York-based Shen, Milsom & Wilke Inc., under-floor air conditioners aren't perfect -- but many data centers are stuck with them, and their efficiency depends on how they are set up.
McFarlane said raised floors should be at least 18 inches high, and preferably 24 to 30 inches, to hold the necessary cable bundles without impeding the high volumes of air flow. But he also said those levels aren't realistic for buildings that weren't designed with that extra height.
Overhead cooling or other designs that use the ceiling plenum as the return air path are another option. But McFarlane warns that these systems can require supplemental fans and additional ductwork that must be coordinated with the overhead cable tray (now needed because there is no raised floor). In short, McFarlane said overhead cooling can become a more complex and expensive design than a good raised floor.
Another workaround is localized cooling. These options include the specialized enclosures APC promotes with its InfraStruXure line. Other options include overhead spot cooling or liquid-cooled cabinets, such as those offered by Columbus, Ohio-based Liebert Corp. The Liebert X-treme Density heat removal system, for example, uses overhead fans and a waterless refrigerant pump to maintain safe rack temperatures.
Regardless of what option works best in your environment, the message remains the same -- for optimal efficiency, direct cooling where it needs to go.
Let us know what you think about the story; e-mail: Matt Stansberry, News Editor