
Modern data center cooling systems cut out energy waste

Better data center cooling techniques and technologies exist, so it's up to IT to identify the correct approach for their organization.

Data center advances have led to ever-increasing energy demands, which require better cooling techniques. Limiting costs is also essential, so choosing the right approach will balance efficiency and expenditures with an optimal server environment.

The elevated heat levels produced in high-density cabinets have stretched the limits of conventional data center cooling systems. Newer, more effective techniques have emerged in response; some are highly developed, while others are still in their infancy. All take a targeted approach to data center cooling to reduce energy use.

Containment variations

Containment is an extension of the hot aisle/cold aisle concept. The ends of rows are blocked by doors or plastic curtains to further prevent air mixing. Because cooling typically accounts for a large share of data center power use, consider some form of containment to minimize waste.

Partial containment, blocking only the ends of the hot or cold aisles, is up to 80% as effective as full containment. Both methods improve cooling and energy efficiency in new and existing data centers.

Fire protection is the main concern in existing rooms. Full containment can block sprinkler water dispersion or gas-based fire suppression, which is both dangerous and illegal.

There are three solutions to this problem: install sprinkler and/or inert gas heads in hot and cold aisles; erect barriers that can be electrically dropped upon smoke detection; or deploy partial containment. All U.S.-based operations must conform to the NFPA-75 Fire Protection Standard, particularly with dropped barriers.

Heat wheels and adiabatic cooling

Heat wheels, also called Kyoto wheels, are large, slowly rotating devices with multiple air chambers. As the wheel turns, its chambers pass alternately through the hot indoor airstream and a cool outdoor airstream, carrying heat out of the data center.

These sophisticated wheels act as heat exchangers during the rotational cycle and bring in only a tiny amount of outside air. They require very little energy to rotate, and are effective in most climates. Of all the forms of free cooling, heat wheels are at the top of the energy-efficiency list.
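
In heat exchanger terms, the air returned to the cold aisles ends up between the hot-aisle return temperature and the outdoor temperature, in proportion to the wheel's effectiveness. The short Python sketch below illustrates that relationship; the 0.85 effectiveness figure and the temperatures are illustrative assumptions, not vendor data.

    # Simplified, sensible-only heat wheel model: supply air approaches the
    # outdoor temperature in proportion to the wheel's effectiveness.
    # The 0.85 effectiveness and the temperatures are illustrative assumptions.

    def wheel_supply_temp_c(return_air_c: float,
                            outdoor_air_c: float,
                            effectiveness: float = 0.85) -> float:
        """Approximate supply air temperature leaving the heat wheel."""
        return return_air_c - effectiveness * (return_air_c - outdoor_air_c)

    if __name__ == "__main__":
        # 35 C hot-aisle return and 15 C outdoor air -> roughly 18 C supply air
        print(round(wheel_supply_temp_c(35.0, 15.0), 1))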

Adiabatic cooling is evaporative cooling, an energy-efficient way of dissipating heat. Changing water from liquid to vapor absorbs heat, so if we spray water on an outdoor chamber in a warm, dry climate and the water evaporates quickly, the chamber cools down. If we simultaneously pass warm air through the inside of the chamber, the air is cooled. The amount of water used in the process is generally less than what cooling towers consume.
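
The scale of the effect comes from water's latent heat of vaporization, roughly 2,260 kilojoules per kilogram. Here is a back-of-the-envelope Python sketch; the evaporation rates are chosen purely for illustration.

    # Back-of-the-envelope: heat rejected by evaporating water.
    # Latent heat of vaporization of water is roughly 2,260 kJ/kg near
    # atmospheric conditions; the flow rates below are illustrative.

    LATENT_HEAT_KJ_PER_KG = 2260.0

    def evaporative_heat_rejection_kw(water_kg_per_min: float) -> float:
        """Return heat rejected in kW for a given evaporation rate."""
        kg_per_sec = water_kg_per_min / 60.0
        return kg_per_sec * LATENT_HEAT_KJ_PER_KG  # kJ/s is the same as kW

    if __name__ == "__main__":
        for rate in (1.0, 5.0, 10.0):  # kg of water evaporated per minute
            print(f"{rate:4.1f} kg/min evaporated ~ "
                  f"{evaporative_heat_rejection_kw(rate):6.1f} kW rejected")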

Source-of-heat data center cooling systems

In-row cooling (IRC) moves the computer room air conditioners (CRACs) into the cabinet rows, either at the ends or between IT cabinets. IRC units deliver air directly into the cold aisles, then draw discharge air from the hot aisles directly into the rear of each unit, leaving little hot air to recirculate -- even if there's an open path, such as in a partial containment design. Since the air paths are short, the required fan power is lower than with traditional CRACs.

Some IRCs incorporate methods of controlling air-flow direction, with high-efficiency fans on variable-speed controls to automatically match cooling to heat release. This minimizes energy use. The most common control method uses sensors attached to cabinet doors that monitor inlet air temperature and humidity.
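
The energy payoff comes from the fan affinity laws: fan power varies roughly with the cube of fan speed, so even a modest speed reduction cuts power sharply. The minimal Python sketch below shows a proportional speed control keyed to inlet temperature; the setpoint, gain, minimum speed and fan wattage are hypothetical values, not figures from any particular IRC product.

    # Why variable-speed IRC fans save energy: per the fan affinity laws,
    # fan power scales roughly with the cube of speed, so 60% speed needs
    # only about 22% of full-speed power. All values below are hypothetical.

    FULL_SPEED_FAN_W = 800.0     # hypothetical full-speed fan power
    SETPOINT_C = 24.0            # hypothetical cold-aisle inlet setpoint
    GAIN_PER_DEG_C = 0.10        # hypothetical proportional gain

    def fan_speed_fraction(inlet_temp_c: float) -> float:
        """Simple proportional control: ramp speed up as inlet air warms."""
        speed = 0.4 + GAIN_PER_DEG_C * (inlet_temp_c - SETPOINT_C)
        return max(0.4, min(1.0, speed))  # keep a minimum airflow floor

    def fan_power_w(speed_fraction: float) -> float:
        """Fan affinity law: power varies with the cube of speed."""
        return FULL_SPEED_FAN_W * speed_fraction ** 3

    if __name__ == "__main__":
        for inlet in (23.0, 25.0, 27.0, 30.0):
            s = fan_speed_fraction(inlet)
            print(f"inlet {inlet:4.1f} C -> {s:.0%} speed, {fan_power_w(s):5.0f} W")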

IRCs are available in chilled water, compressorized and refrigerant-based systems. Some provide humidity control, which means they also need condensate drainpipe connections, while others only provide sensible cooling.

The biggest drawback to IRCs is space -- they take up anywhere from 12 to 30 inches in width. While the floor space requirement is usually offset by the elimination of large, perimeter CRACs, in-row units disrupt the continuity of cabinet rows.

Above-cabinet cooling units most commonly supplement conventional CRACs to deliver additional cooling directly to high-density cabinets. Since these units provide only sensible cooling, CRACs are still necessary to control humidity and cool the general lower-density cabinet space. Above-cabinet units require overhead space, as well as careful coordination with other overhead infrastructure during the data center design process.

This method is refrigerant-based. The refrigerant systems rank near the top of the scale in energy efficiency and don't devour floor space -- they mount either directly above the cabinets or over the cold aisles between cabinet rows.

Rear door heat exchangers (RDHxs) replace the perforated rear doors on conventional cabinets. Hot exhaust air from the computing equipment passes through water-cooled coils in the door, which absorb the heat before the air escapes into the room -- ideally leaving the discharge air at the same temperature as the inlet air. Passive RDHxs -- designed with a low pressure drop through the door coils -- are at the top of the energy-efficiency scale.

An advantage of RDHx coolers is their performance with warm water. Legacy building cooling systems use water at 45 degrees Fahrenheit, but 55 to 60 degree F temperatures are becoming more common. Unlike most cooling units, RDHxs still perform well at elevated water temperatures.
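
Sizing the water loop for a door follows the sensible-heat relation Q = flow x specific heat x temperature rise. The Python sketch below gives a rough per-cabinet estimate; the 20 kW load and 6 degree Celsius water temperature rise are assumptions for illustration only.

    # Rough RDHx water-flow estimate from Q = m_dot * cp * dT.
    # Specific heat of water is about 4.186 kJ/(kg*K); the 20 kW cabinet
    # load and 6 C water temperature rise are assumed for illustration.

    WATER_CP_KJ_PER_KG_K = 4.186

    def water_flow_lpm(heat_kw: float, water_rise_c: float) -> float:
        """Liters per minute of water needed to absorb a given heat load."""
        kg_per_sec = heat_kw / (WATER_CP_KJ_PER_KG_K * water_rise_c)
        return kg_per_sec * 60.0  # roughly 1 kg of water per liter

    if __name__ == "__main__":
        # A 20 kW cabinet with a 6 C rise needs on the order of 48 L/min
        print(round(water_flow_lpm(20.0, 6.0), 1))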

RDHx units also attach to cabinets of virtually any size or manufacture via adapter frames. They add approximately 6 inches in cabinet depth, necessitate water piping and valves for each cabinet, and require clear-swing space for the connecting hoses so doors can open. This is challenging in raised-floor designs when hoses conflict with floor-panel stringers.

RDHx installations are never fully contained, since they rely on air recirculating through the room. That open design provides a measure of inherent redundancy: if one door's cooling is lost, the room's air handling can still absorb some of the heat.

Self-cooled cabinets can be great problem solvers, particularly when a major cooling system upgrade is not realistic and you operate only a few high-density cabinets. The cabinets are fully enclosed, so the equipment's hot exhaust air is cooled inside the cabinet and recirculated to the equipment intakes.

These cabinets may use chilled water or refrigerant connections, or even contain their own cooling compressors, much like a big refrigerator. Self-cooled units are generally larger than other cabinets -- and expensive, but less so than a major cooling upgrade.

The biggest concern with these cabinets is cooling failure. There are cabinets with redundant, "hot swappable" cooling components, but the most common system design is an automatic door release that opens the rear door in the event of a cooling failure. This means the equipment is subject to the cooling conditions in the room, and may overheat within minutes.

Immersion cooling is a newer technique. Servers are fully immersed in a bath of non-conductive coolant, which envelops the components and dissipates heat. Solid-state drives are preferred for storage, but conventional drives can be used if sealed or suspended above the oil. Because the server fans are removed, immersion cooling eliminates 10% to 20% of the server's energy consumption, as well as the most failure-prone elements of conventional cooling systems.

The thermal inertia of the liquid keeps servers within temperature tolerance in the event of a power failure -- with no cooling power required. The only moving parts are the circulating pump, the condenser water pump and the cooling tower fans. One system can maintain cooling for 25-kW density IT equipment for a half hour after mechanical units fail. Systems are built for capacities of 100 kW or more and operate in any climate condition without a cooling plant.
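
That ride-through is simple thermal mass at work: the time available is the coolant mass times its specific heat times the allowable temperature rise, divided by the IT load. The rough Python sketch below -- with an assumed coolant mass, specific heat and allowable rise, not vendor data -- lands in the same ballpark as the half-hour figure cited above.

    # Ride-through estimate from thermal mass: t = m * c * dT / P.
    # The 1,100 kg coolant mass, 1.9 kJ/(kg*K) specific heat (mineral-oil
    # class fluid) and 20 C allowable rise are assumptions for illustration.

    def ride_through_minutes(coolant_kg: float,
                             specific_heat_kj_per_kg_k: float,
                             allowable_rise_c: float,
                             it_load_kw: float) -> float:
        """Minutes until the coolant warms through the allowable rise."""
        seconds = (coolant_kg * specific_heat_kj_per_kg_k *
                   allowable_rise_c / it_load_kw)
        return seconds / 60.0

    if __name__ == "__main__":
        # ~1,100 kg of oil absorbing 25 kW gives roughly 28 minutes
        print(round(ride_through_minutes(1100, 1.9, 20.0, 25.0), 1))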

The result is extreme energy efficiency (as little as 50% of the energy use of an air-cooled design) and potentially lower total cost once the conventional cooling plant is eliminated. Immersion tanks occupy about 12 square feet for 42 rack units and weigh between 2,500 and 3,000 lbs.

The main concern with these systems is the potential messiness of working on an oil-covered server, but this hasn't been a complaint of early adopters.

Direct liquid cooling is re-appearing in high-performance computing environments. Some industry experts predict it will become necessary as enterprise servers and their processors shrink and become more powerful.

These systems circulate chilled water or a refrigerant through the server to remove heat directly from the processor via a special heat sink. The liquid then travels to a secondary liquid-to-liquid heat exchanger in each cabinet, or sometimes all the way back to the central cooling system.

This technology presents the risk of leaks, which manufacturers take great pains to avoid by designing liquid lines with as few connection points as possible. Liquid cooling systems also require managing piping connections alongside the power and data cabling in the data center.

About the author:
Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane is one of several regular contributors to SearchDataCenter's Advisory Board, a collection of experts working in a variety of roles across the IT industry.

Next Steps

The pros and cons of "free" data center cooling methods

What you should know about data center cooling optimization

This was last published in March 2015
