Immersion cooling and other source-of-heat options for cooling data centers

Source-of-heat cooling can be energy-efficient and free up data center design. Immersion cooling, a new source-of-heat option, has its pros and cons.

Source-of-heat data-center cooling options now include immersion cooling and direct liquid-cooling systems. As always, the goal is safe heat removal with minimal energy consumption.

Locating cooling units close to computing equipment significantly reduces the fan energy needed to push that cool air around a room, under the floor and through ducts. Your source-of-heat cooler choice depends on room design, accessibility, cost and other factors.
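To see why shorter air paths matter, note that fan power scales with the pressure the fan must overcome, and long runs through raised floors and ducts add pressure drop. The short Python sketch below compares two cases; the airflow, pressure and efficiency figures are assumptions chosen only to illustrate the effect, not measurements from any product.

# Rough fan-power comparison: perimeter cooling pushed through a raised
# floor and ducts versus a close-coupled (source-of-heat) unit.
# All figures are illustrative assumptions, not vendor data.

def fan_power_watts(airflow_m3_s, static_pressure_pa, fan_efficiency):
    """Fan shaft power = volumetric flow x pressure rise / efficiency."""
    return airflow_m3_s * static_pressure_pa / fan_efficiency

airflow = 0.7  # m^3/s, roughly what a 10 kW rack needs at a 12 K air-temperature rise
perimeter = fan_power_watts(airflow, static_pressure_pa=600, fan_efficiency=0.6)
close_coupled = fan_power_watts(airflow, static_pressure_pa=150, fan_efficiency=0.6)

print(f"Perimeter unit fan power:     {perimeter:.0f} W")   # ~700 W
print(f"Close-coupled unit fan power: {close_coupled:.0f} W")  # ~175 W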

Circulating fluid -- water or refrigerant -- conducts heat away from the data center through pipes from the cooling unit. While leaks are a concern with water in the data center, proper installation, leak detection systems and drains mitigate the risk of damaging expensive hardware. If you're not comfortable with water coursing through your data center, the alternative is a refrigerant that leaks as a harmless gas. But refrigerant piping is more difficult and expensive than water plumbing to install, balance and alter.

Many source-of-heat coolers remove only sensible heat -- the dry heat that raises air temperature, as opposed to latent heat tied to moisture. Electronic equipment produces only sensible heat and adds no moisture to the air, so many source-of-heat coolers have no humidity control.
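Because the load is purely sensible, cooler sizing reduces to how much air must move for a given temperature rise across the equipment. The Python sketch below shows that relationship; the 10 kW load and 12 K rise are assumptions picked for illustration.

# Sensible-heat sizing: how much airflow a cooler must move to carry
# away a given IT load. Values are illustrative assumptions.

AIR_DENSITY = 1.2          # kg/m^3 at typical data center conditions
AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg*K)

def required_airflow_m3_s(load_kw, delta_t_k):
    """Airflow needed so the air temperature rise equals delta_t_k."""
    return load_kw / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

flow = required_airflow_m3_s(load_kw=10, delta_t_k=12)
print(f"10 kW rack at a 12 K air-temperature rise: {flow:.2f} m^3/s "
      f"(~{flow * 2119:.0f} CFM)")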

Since many source-of-heat units don't operate at low heat loads, data centers with lower-heat-density racks and cabinets require conventional air conditioners, such as perimeter computer room air conditioners. In other designs, separate humidification control might be all that's needed to supplement source-of-heat coolers.

Source-of-heat cooling options

Immersion cooling. Immersion cooling is the newest source-of-heat option for server rooms, having come to market in 2011.

With only small modifications to the disk drives, computing equipment is fully immersed in a nonconductive oil bath that conducts heat directly away from circuits. In the event of a failure, the oil bath provides an enormous thermal mass to keep equipment cooled with only a small circulating pump run on backup power.
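A rough way to gauge that thermal mass is to ask how long the oil alone could soak up the full load before warming by a set amount. The sketch below makes that estimate; the tank volume, load and allowable temperature rise are assumptions for a hypothetical tank, chosen only to show the order of magnitude.

# Ride-through estimate: how long an immersion tank's oil can absorb the
# full IT load on its own heat capacity. Tank and load figures are
# illustrative assumptions, not a specific product's specs.

OIL_DENSITY = 880        # kg/m^3, typical of mineral-oil coolants
OIL_SPECIFIC_HEAT = 1.9  # kJ/(kg*K)

def ride_through_seconds(oil_litres, load_kw, allowable_rise_k):
    mass_kg = oil_litres / 1000 * OIL_DENSITY
    stored_kj = mass_kg * OIL_SPECIFIC_HEAT * allowable_rise_k
    return stored_kj / load_kw

t = ride_through_seconds(oil_litres=500, load_kw=20, allowable_rise_k=15)
print(f"Oil alone buys roughly {t / 60:.0f} minutes at full load")  # ~10 minutes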

While highly effective, immersion cooling requires modified hardware and makes the equipment messy to work on.

In-row air conditioners. These cooling units are packaged like equipment cabinets to be intermixed within or at the ends of cabinet rows. They pull in hot return air from the hot aisle, recool it and discharge it into the cool aisle. Some units simply discharge air into the cool aisle, while others can direct air toward the cabinets that need it most.

In-row cooling systems. Cooling in-row is highly efficient, especially as part of a contained cooling environment. Temperature sensors on the front of adjacent cabinets control fan speed and cooling capacity, because the inlet temperature is what matters.
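The control idea is simple in principle: read the inlet temperatures the cabinet-front sensors report and modulate fan speed from the worst case. A minimal sketch, assuming a hypothetical setpoint and proportional gain rather than any vendor's actual control algorithm:

# Minimal inlet-temperature-driven fan control sketch. Setpoint, gain and
# speed limits are illustrative assumptions.

SETPOINT_C = 24.0   # target cabinet inlet temperature
GAIN = 0.15         # fan-speed fraction added per degree above setpoint
MIN_SPEED, MAX_SPEED = 0.3, 1.0

def fan_speed(inlet_temps_c):
    """Drive the fans from the hottest measured inlet, since inlet
    temperature is what the IT equipment actually sees."""
    error = max(inlet_temps_c) - SETPOINT_C
    return min(MAX_SPEED, max(MIN_SPEED, MIN_SPEED + GAIN * error))

print(fan_speed([23.5, 24.2, 26.8]))  # hottest inlet 2.8 K high -> ~0.72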

In-row coolers (IRCs) are available for chilled-water, condenser-water and refrigerant-based heat transfer, with and without humidity control. Chilled-water and pumped refrigerant units use little power, so they can also be maintained on an uninterruptible power supply (UPS) for a high-density cooling ride-through after a power failure. Just keep enough chilled water reserve in the pipes and run small pumps on the UPS to circulate the water.
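Sizing that reserve is straightforward arithmetic: the water's heat capacity, the allowable temperature rise and the IT load determine the ride-through time. A minimal sketch, with every figure assumed for illustration:

# How much ride-through a chilled-water reserve provides while mechanical
# cooling restarts after a power failure. Volume, load and allowable
# water-temperature rise are illustrative assumptions.

WATER_DENSITY = 1000        # kg/m^3
WATER_SPECIFIC_HEAT = 4.19  # kJ/(kg*K)

def ride_through_minutes(reserve_litres, it_load_kw, allowable_rise_k):
    mass_kg = reserve_litres / 1000 * WATER_DENSITY
    stored_kj = mass_kg * WATER_SPECIFIC_HEAT * allowable_rise_k
    return stored_kj / it_load_kw / 60

print(f"{ride_through_minutes(2000, 150, 5):.1f} minutes")  # ~4.7 min for 150 kW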

IRCs can be relocated around the data center if cooling needs change.

Above-cabinet cooling. Above-cabinet cooling systems discharge cool air directly in front of the cabinets that need it, and pull hot air in over the cabinets' tops. Some units mount atop cabinets; others are suspended from the ceiling in the middle of the cool aisle. Look into using above-cabinet units to form the cool-aisle ceiling in contained air designs.

Overhead coolers target very high-density cabinets from 8 kW to 25 kW, and are highly energy-efficient. They remove only sensible heat, so they supplement conventional cooling. Overhead systems are controlled by localized temperature sensors and use refrigerant.

As with IRCs, the low energy draw of overhead coolers can be maintained on the UPS to provide a cooling ride-through in the event of a power failure. Both top-of-rack and center-of-row units also can be physically relocated with relative ease.

Self-cooled cabinets. The epitome of closed-loop systems, self-cooled cabinets recirculate air within their own enclosures, using water or refrigerant for cooling. Although highly energy-efficient, self-cooled cabinets are large, heavy and expensive. They come into play when rooms have no other cooling option or when high-density "islands" of hardware must be cooled in an otherwise low-density server room.

Rear-door coolers. These radiators replace the normal rear doors on cabinets and cool the hot exhaust air with circulating chilled water before discharging it into the room. Rear-door coolers remove only sensible heat and require supplemental humidity control.

Rear-door coolers were originally designed to pre-cool the very high-temperature air discharged from high-performance computers, helping out conventional air conditioners. Newer rear-door coolers can satisfy the total hardware cooling requirement if they're installed on enough cabinets. In many instances, that means every cabinet.

With rear-door cooling, the air is the same temperature throughout the data center, so cabinets can be arranged in any formation without a cool or hot aisle.

Independent tests have shown that fully passive rear-door coolers (those without fans) are the most energy-efficient cooling units available.

Future options for cooling data centers

Today's source-of-heat coolers rely on mechanical systems to transfer heat to the outside air. Ideally, cold outside air (below 27 degrees Celsius) would directly cool the equipment or the circulating water -- approaches known as free cooling. In most climates, seasonally cold air would let data centers switch from mechanical to free cooling. The transition from mechanical to non-mechanical cooling, however, is usually the biggest challenge in a free cooling design.
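The switchover itself can be framed as a control decision driven by outside-air temperature. The sketch below is a minimal illustration of that decision, assuming a dry-bulb threshold taken from the figure above and a hypothetical deadband; production economizer sequences also weigh humidity and partial (mixed) modes, which is why the transition is the hard part.

# Sketch of the mechanical-versus-free-cooling decision, assuming a simple
# dry-bulb threshold. Everything except the 27-degree limit is an assumption.

FREE_COOLING_LIMIT_C = 27.0

def cooling_mode(outside_air_c, deadband_c=2.0):
    """Return 'free' or 'mechanical'; the deadband avoids rapid switching
    when the outside temperature hovers near the limit."""
    if outside_air_c <= FREE_COOLING_LIMIT_C - deadband_c:
        return "free"
    return "mechanical"

for t in (12.0, 24.0, 26.0, 31.0):
    print(t, cooling_mode(t))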

As computing equipment becomes more heat-tolerant, free cooling could become the de facto data center design in virtually any climate year-round.

Direct liquid cooling, used in mainframes until about 1990, is also becoming more common for server cabinets as operating temperatures increase. Liquid cooling is already used for some high-performance hardware and could gain ground on air-based cooling.

About the author:

Paul Korzeniowski is a freelance writer who specializes in cloud computing and data-center-related topics. He is based in Sudbury, Mass., and can be reached at paulkorzen@aol.com.

This was first published in September 2013