
Modern data center cooling systems: No muss, no fuss, no waste

If old school data center cooling strategies leave your energy bill sky high, try more modern systems and bring it back down to earth.

What is lacking in your current data center cooling systems? Traditional data center cooling systems waste a lot of energy -- and money.

You probably have a few computer room air conditioning (CRAC) units continuously pushing cooled air through perforated plates in a raised floor to maintain a suitable temperature for all the racks in the room. There is a better way. Several new cooling approaches and technologies promise short capital payback coupled with ongoing financial benefits.

Not too hot or too cold, but just right

First, make sure you're not running the data center temperature too low.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has published recommended and allowable temperatures for defined classes of data centers since 2004. The recommended high-end temperature rose from 77 degrees Fahrenheit (25 degrees Celsius) in 2004 to 81°F (27°C) in 2008. By 2011, ASHRAE defined a range of data center classes in which maximum allowable temperatures reach as high as 113°F (45°C).
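The 2011 guidelines group facilities into classes A1 through A4 with progressively wider allowable ranges. As a minimal sketch (class limits in degrees Celsius taken from the 2011 ASHRAE thermal guidelines; the helper name is our own), a temperature check against a class might look like:

```python
# Allowable inlet temperature ranges (degrees C) for the 2011 ASHRAE
# data center classes A1-A4, per the thermal guidelines cited above.
ALLOWABLE_C = {
    "A1": (15, 32),
    "A2": (10, 35),
    "A3": (5, 40),
    "A4": (5, 45),  # 45 C = 113 F, the maximum mentioned in the text
}

def within_allowable(temp_c, ashrae_class="A1"):
    """Return True if an inlet temperature sits inside the class range."""
    low, high = ALLOWABLE_C[ashrae_class]
    return low <= temp_c <= high

print(within_allowable(27, "A1"))  # True: 81 F is fine even for A1
print(within_allowable(42, "A1"))  # False: too hot for A1...
print(within_allowable(42, "A4"))  # True: ...but allowed in A4
```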

The easiest way to save money is to reduce how many CRAC units are running. A higher-temperature data center matches needs against risk and requires far less cooling; this means fewer CRAC units. In some data centers, half of the CRAC units can be turned off, directly reducing energy and maintenance costs.
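The arithmetic behind that saving is straightforward. A hedged sketch, with every figure (per-unit power draw, electricity tariff) an illustrative assumption rather than vendor data:

```python
HOURS_PER_YEAR = 8760

def annual_savings_usd(units_off, kw_per_unit=30.0, tariff_usd_per_kwh=0.10):
    """Energy cost avoided by switching off surplus CRAC units.
    kw_per_unit and tariff_usd_per_kwh are illustrative assumptions."""
    return units_off * kw_per_unit * HOURS_PER_YEAR * tariff_usd_per_kwh

# Turning off three hypothetical 30 kW units at $0.10/kWh:
print(f"${annual_savings_usd(3):,.0f} per year")  # $78,840 per year
```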

Using variable-speed CRAC units instead of fixed-speed ones is another option: The CRAC units only run at the speed required to maintain the desired temperature. CRAC units operate most effectively when running at 100%, so some variable-speed systems are not fully optimized when operating at partial load.
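Partial-load behavior matters because fan power in a variable-speed unit falls roughly with the cube of shaft speed (the fan affinity laws). A quick sketch of that relationship:

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity law: shaft power scales with the cube of speed."""
    return speed_fraction ** 3

# At 80% speed, the fan draws only about half of full-speed power:
print(round(fan_power_fraction(0.8), 3))  # 0.512
```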

Another cost-cutting strategy is to exploit thermal inertia: run standard fixed-speed CRAC units at 100% capacity until the data center cools considerably below its target temperature, then switch the units off. The room is allowed to warm up until it reaches a defined point, at which the CRAC units come back on.
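This on/off cycling is ordinary deadband (hysteresis) control. A minimal sketch, with setpoints chosen purely for illustration:

```python
def crac_state(temp_c, state, low_c=20.0, high_c=25.0):
    """Deadband control: cool the room down to low_c, switch off, then
    coast on thermal inertia until high_c triggers the units again.
    The setpoints are illustrative, not recommendations."""
    if state == "on" and temp_c <= low_c:
        return "off"
    if state == "off" and temp_c >= high_c:
        return "on"
    return state

print(crac_state(19.5, "on"))   # off: target undershot, stop cooling
print(crac_state(22.0, "off"))  # off: still coasting inside the band
print(crac_state(25.5, "off"))  # on: defined point reached, restart
```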

Data center cooling diagram
Figure 1. Proper hot and cold aisle setups sandwich server racks in a data center.

While these methods curb energy spending, straightforward volumetric data center cooling systems remain wasteful: most of the cooled air never comes into close enough contact with IT equipment to cool it effectively.

Hot and cold aisles require a smaller volume of cooling air if set up properly around server racks (see Figure 1).

The spaces between facing server racks are enclosed, with a roof placed over the racks and doors at either end. Cold air blows into the enclosed space and blanking plates prevent cold air from leaking out of the racks. Ducting directs the cold air to the hottest parts of the equipment. Effective aisle-containment systems can be highly engineered or simple homegrown approaches -- using polycarbonate to cover the racks as a roof and polypropylene sheeting as the doors at each end, for example.
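The volume of air a contained aisle must deliver follows directly from the heat load and the supply-to-return temperature rise, using standard air properties. A sketch, with the rack load and temperature rise as hypothetical inputs:

```python
AIR_DENSITY = 1.2           # kg/m^3, roughly sea level at room temperature
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def required_airflow_m3s(load_watts, delta_t_c):
    """Volumetric airflow needed to carry a heat load with a given
    temperature rise between cold-aisle supply and hot-aisle return."""
    return load_watts / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)

# A hypothetical 10 kW rack with a 12 C supply-to-return rise:
print(round(required_airflow_m3s(10_000, 12), 3))  # about 0.691 m^3/s
```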

The hot air then warms the data center or is incorporated into air and water heating systems throughout the building.

Aim to minimize the volume of air to cool through targeted cooling, with each rack treated as its own contained system within the data center. Commercial options include Chatsworth Towers: self-contained enclosures holding a standard 19-inch rack that channel cooling air from bottom to top without mixing it with the rest of the data center's air.

Set your cooling systems free

Maximum modern cooling efficiency achieved? Not so fast. In certain climates, higher temperature operation opens up the possibility of free air cooling, eliminating CRAC units. Ambient external air that remains below 77°F (25°C) will cool a data center that operates at 86°F (30°C) without additional mechanical cooling, provided moisture levels are low.
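Whether free cooling pays off depends on how many hours a year the ambient air stays below the usable threshold. A sketch that estimates that fraction from hourly readings (the temperature data here is invented for illustration):

```python
def free_cooling_fraction(hourly_temps_c, threshold_c=25.0):
    """Fraction of hours when outside air alone could cool the room."""
    eligible = sum(1 for t in hourly_temps_c if t < threshold_c)
    return eligible / len(hourly_temps_c)

# Invented week of ambient readings cycling through seven hourly values:
temps = [18, 21, 24, 26, 28, 23, 19] * 24
print(round(free_cooling_fraction(temps), 2))  # 0.71
```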

KyotoCooling diagram
Figure 2. The Kyoto Wheel cooling design.

Simply ducting external air into the server room can foster thermal hot spots or dump dust and contaminants into the data center. This is where newer designs, such as the Kyoto Wheel, come into play.

A corrugated metal wheel, approximately 10 feet (3 meters) in diameter, rotates slowly through a two-compartment space. The data center's hot air flows through one space, transferring its heat to the metal in the wheel, then flows back into the data center. Cold external air flows through the other space, absorbing the heat from the metal and exhausting to the outside air.

The data center air loop is enclosed, and the small volume of air transferred as the wheel rotates carries only tiny amounts of particulates or moisture from the external compartment into the data center space. The wheel itself acts partially as a filter.
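The wheel behaves like a rotary air-to-air heat exchanger: the supply temperature it delivers depends on its thermal effectiveness. A sketch, with the effectiveness figure assumed for illustration rather than taken from any Kyoto Wheel datasheet:

```python
def wheel_supply_temp_c(t_return_c, t_outdoor_c, effectiveness=0.75):
    """Supply temperature from a rotary heat-exchanger wheel; the
    effectiveness value is an assumption, not a vendor specification."""
    return t_return_c - effectiveness * (t_return_c - t_outdoor_c)

# 35 C hot-aisle return air against 15 C outdoor air:
print(wheel_supply_temp_c(35, 15))  # 20.0
```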

The low-speed fans and motors in the Kyoto cooling method require little power and little maintenance beyond wheel cleaning and motor checks; most adopters run them from solar power with a battery backup. The method's lifetime is conservatively estimated at 25 years.

When there's no chill in the air, try water

Adiabatic cooling
Figure 3. Water cooling methods for data centers.

Aside from these air-based methods, another approach is adiabatic cooling, using the cooling effect of water as it evaporates (see Figure 3).

This is effective for data centers located in warmer climates, where direct environmental heat will evaporate water from the wet filters, cooling the air pulled through the filters. While the two-chamber system keeps contaminants from entering the data center from the external air, filters must be changed on a regular basis to remove particulates. Also, the air's moisture content may need to be adjusted to prevent moisture condensing on IT equipment in the data center.
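Direct evaporative cooling drives the outlet air toward the wet-bulb temperature; how close it gets depends on the unit's saturation effectiveness. A sketch with an assumed effectiveness and invented intake conditions:

```python
def adiabatic_outlet_c(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
    """Direct evaporative cooling: outlet air approaches the wet-bulb
    temperature; the saturation effectiveness figure is assumed."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# 35 C dry-bulb, 22 C wet-bulb intake air:
print(round(adiabatic_outlet_c(35, 22), 2))  # 23.95
```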

For data center operators who want to run extreme equipment densities with high thermal profiles in warm climates, consider direct water cooling. IBM advanced water cooling to extraordinary levels in its Aquasar and Leibniz SuperMUC supercomputer systems. The design gets around the old problem of mixing water and electricity: negative pressure sucks the water around the system rather than pumps pushing it. If a leak occurs, air is pulled into the system instead of water escaping into the data center. Advanced sensors rapidly identify where a leak occurs, and modular construction allows repairs while the rest of the system continues running.

IBM uses a hot water inlet for the cooling liquid, which may seem strange. But in such a targeted system, inlet water hotter than 86°F (30°C) still keeps components such as CPUs within operating parameters, while the outlet water emerges at around 113°F (45°C). That high-temperature outlet water is ideal for supplying hot water to other parts of the building via heat exchangers. Besides cutting energy use in the data center by around 40%, this drives further savings in general facility operations.
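One way to see the effect of such savings is through power usage effectiveness (PUE), the ratio of total facility power to IT power. A sketch with purely illustrative load figures:

```python
def pue(it_kw, cooling_kw, other_kw):
    """Power usage effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Invented loads: cutting a 400 kW cooling load by 40% on a 1 MW IT base
# moves PUE from 1.5 to 1.34.
print(pue(1000, 400, 100))        # 1.5
print(pue(1000, 400 * 0.6, 100))  # 1.34
```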

For even more thorough liquid cooling, investigate fully immersive systems. Systems from Iceotope, Green Revolution Cooling and other companies cover the whole server -- or other pieces of IT equipment -- with a thermally conductive but electrically isolating liquid. These systems are ideal for GPU servers with hundreds of cores running in high-density configurations or for massive computing densities with hot-running CPUs, and can deal with 100 kW+ per bath -- which is essentially an enclosed rack on its side. Some immersive systems are run at a liquid temperature of 140°F (60°C).

Fans are unnecessary in an immersive environment. And because the engineered liquids are far better at removing heat than air, the equipment can run at higher temperatures and the heat can be recovered from the liquid.

Keep your cool

These systems are the main methods to keep data center equipment cool for different operations and different environmental conditions. But they all require monitoring.

This is where data center infrastructure management (DCIM) comes in. Thermal sensors and infrared detectors help create a map of existing hot spots in a data center. Computational fluid dynamics analysis enables "what-if?" scenarios to show how new cooling flows and different inlet temperatures will work with different systems.

With DCIM in place, continued monitoring identifies hot spots rapidly, allowing systems to be slowed down, turned off or replaced as necessary. In many cases, the sudden appearance of a hot spot indicates an imminent equipment failure -- catching it before the failure occurs benefits system uptime and availability. Vendors in the DCIM space include Emerson, Siemens, Nlyte, Romonet and Eaton, among others.
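A simple way DCIM-style monitoring can flag trouble is to alert both on absolute temperature limits and on sudden jumps between consecutive polls. A minimal sketch (thresholds and sensor names are invented for illustration):

```python
def hot_spot_alerts(readings, limit_c=32.0, jump_c=3.0):
    """Flag sensors over an absolute limit, or showing a sudden jump
    between consecutive polls -- often an early sign of failing gear.
    Thresholds and sensor names are invented for illustration."""
    alerts = []
    for sensor, temps in readings.items():
        if temps[-1] > limit_c:
            alerts.append((sensor, "over limit"))
        elif len(temps) > 1 and temps[-1] - temps[-2] >= jump_c:
            alerts.append((sensor, "sudden rise"))
    return alerts

readings = {"rack-04": [24.0, 24.5, 33.1], "rack-11": [22.0, 22.2, 26.0]}
print(hot_spot_alerts(readings))
```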

An old-fashioned approach to data center cooling is practically guaranteed to waste money. Follow new guidelines and evaluate new approaches to cool data centers effectively for lower costs and easier maintenance.

This was last published in October 2013


Join the conversation



Hot aisle/cold aisle is a good step in the right direction. The ultimate goal should be: 1. have servers and other IT equipment operate at higher temperatures and 2. have the cooling process right at the source (at the microprocessors). The other aspect the industry will be exploring more and more is energy harvesting from waste heat, similar to co-gen concepts in power systems. It's coming.
Easier to control the environment.
The gap between current operational practices and meeting the capacity utilization challenge:

o Capacity fragmentation cannot be solved by a design solution because future IT configurations are unknown at the time of design. Fragmentation problems must be addressed continuously as they occur as a function of data center operations.
o DCIM tools collect and present information about what is happening in the data center
o DCIM tools do not show the relationship between resource distributions and the IT configuration
o DCIM tools do not track the cooling distribution
o For these reasons, DCIM tools do not track capacity utilization and are insufficient to manage capacity

- The need:

o A means to predict the impact of deviations from the design configuration on IT resource distributions and on the capability of the facility to meet the original compute capacity specification
o A means to track and manage capacity
o A set of tools and techniques to defragment IT resources (reclaim lost capacity) that have become fragmented as a result of deviations from the design configuration

- The solution: Predictive DCIM = Predictive Modeling

o A compute or virtual model of the data center system (building + IT configuration) that can predict the impact of deviations from the design configuration on IT resource distributions, compute capacity and server availability
o A predictive model to defragment the IT resource distributions and reclaim lost capacity