Some basic containment techniques will make data center cooling systems more energy-efficient.
Server cabinet cooling is one of the most common trouble spots for data center managers. Overheating servers and weak cooling systems cause performance issues and failures, yet clear best practices for cooling your data center are elusive. There are proven cooling techniques, ranging from simple hot and cold aisle arrangements to full containment.
Smart cooling protects expensive data center hardware and the critical business operations it supports. At elevated temperatures, the highest-performance servers can crash in as little as a few seconds. IT and facility teams should consider a data center design that works effectively and efficiently, as well as cooling ride-through, or the level of cooling needed to keep critical servers operational between a power failure and generator start-up.
In recent years, cooling experienced more changes than any other part of the data center facility because it must keep up with rapid advances in high-performance hardware. Cooling systems are also the most expensive parts of the facility to construct and to operate.
Server rooms used to be refrigerated like meat lockers. This overcooling wasted power and was environmentally irresponsible and unnecessary. In the future, data centers should be cooled without mechanical refrigeration, regardless of the local climate. This is already achievable in some regions, but the majority of data centers will rely on current cooling technologies for years to come. Data center managers must understand the best practices for each cooling approach and how cooling will continue to evolve.
Improving data center cooling with containment
Modern computing equipment will cool itself if the right quantity of air is delivered to it at the right temperature, known as the inlet temperature. All modern and legacy hardware should operate reliably long term with air-inlet temperatures as high as 27 degrees Celsius (80.6 degrees Fahrenheit), according to the American Society of Heating, Refrigerating and Air-Conditioning Engineers. This temperature provides good cooling with maximum energy efficiency.
Hardware can also handle up to 35 degrees Celsius (95 degrees Fahrenheit) for several days -- a partial cooling failure or a heat wave, for example -- without voiding warranties or significantly increasing failure rates. Energy usage increases as fans speed up, but short-term inefficiencies do not negate energy savings achieved over the long term. To be safe, many data center designers and managers choose to operate at around 24 degrees Celsius (75 degrees Fahrenheit); there is no reason to cool below this.
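The thresholds above lend themselves to a simple monitoring check. This is a minimal illustrative sketch, not vendor monitoring code: the function name, return labels, and structure are assumptions, while the temperature limits come from the figures discussed above.

```python
# Classify a measured server inlet-air temperature against the
# thresholds discussed above (values in degrees Celsius).
RECOMMENDED_MAX_C = 27.0   # ASHRAE recommended upper limit for long-term operation
ALLOWABLE_MAX_C = 35.0     # tolerable for short excursions (days, not months)

def classify_inlet_temp(temp_c: float) -> str:
    """Return a coarse status for an inlet temperature reading."""
    if temp_c <= RECOMMENDED_MAX_C:
        return "ok"          # efficient long-term operation
    if temp_c <= ALLOWABLE_MAX_C:
        return "excursion"   # acceptable briefly, e.g. during a heat wave
    return "critical"        # risk of crashes and hardware damage

print(classify_inlet_temp(24.0))  # conservative setpoint -> "ok"
print(classify_inlet_temp(30.0))  # -> "excursion"
print(classify_inlet_temp(38.0))  # -> "critical"
```

A real deployment would feed this from cabinet-level sensors and alert on any reading above the recommended limit, since sustained excursions increase fan energy use even when they don't threaten warranties.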
Hot aisle and cold aisle containment. You need an effective server cooling strategy even at higher ambient temperatures. The best approach is separating hot and cool air. "Containing" the cool air supply and the hot air discharge prevents mixing and requires attention to cabinet layout and gaps.
Cabinets must be arranged front to front and back to back so that aisles of rack fronts -- cool aisles -- alternate with aisles of exhausts -- hot aisles. With the industry norm of higher inlet temperatures than in years past, the term cool aisle containment has replaced cold aisle containment.
Close gaps between cabinets with fillers and ensure that all unused panel spaces are blocked with blanking panels, which are solid plates, in place of computing hardware. Gap fillers and blanking panels prevent hot air released from the back of computing hardware from recirculating to the front and raising the equipment inlet temperature.
Partial containment. Even with these measures in place, air can still flow over the tops of cabinets and around the ends of rows. The ultimate solution is to erect more barriers to contain the air in either the hot or cool aisles. Barriers can be as simple as plastic curtains -- fire-rated and antistatic, of course -- hung at the ends of rows and above cabinets. This is known as partial containment because some air can still leak through the curtain seams, but it's very effective.
Full containment. Full containment requires solid end panels with doors, plus either solid barriers between the tops of cabinets and the ceiling, or ceiling panels at cabinet height. Full containment is more expensive and less flexible than partial containment, so the choice between the two is often based on budget or on an existing facility's design limitations.
In a new data center design, consider full containment first because it's the most effective cooling strategy, and the rest of the piping, electrical and cable tray can be designed to accommodate solid barriers.
About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.
This was first published in August 2013