New data center cooling strategies to improve efficiency, lower costs

Stagnant strategies for data center cooling will keep energy bills climbing ever higher, but a more modern approach can bring them back down to earth.

What are your data center cooling strategies? Do you run a few computer-room air conditioning units continuously to push cooled air through perforated plates in your raised floor to maintain a suitable temperature for all the racks in the room? If so, you are likely wasting large amounts of energy -- and money. There are several new cooling approaches and technologies that could give a short capital payback coupled with ongoing financial benefits.

First, make sure you're not running the data center temperature too low. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has been tracking recommended and allowable temperatures for defined types of data centers since 2004.

The maximum high-end temperature in 2004 was set at 77 degrees Fahrenheit (25 degrees Celsius). By 2008, it had risen to 81 F (27 C). By 2011, ASHRAE had created a range of data center types, and although the recommended temperature stayed at 81 F (27 C), it raised the maximum allowable temperatures -- the temperatures at which increased equipment failure rates may be seen -- to as high as 113 F (45 C). By matching need against risk, a higher-temperature data center requires far less cooling; this leads to fewer computer room air conditioning (CRAC) units and less energy used.
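To make the guidance concrete, the short Python sketch below classifies a measured rack inlet temperature against the figures quoted above. The thresholds come from those figures, but collapsing them into a single recommended limit and a single allowable limit is a simplification for illustration, not a restatement of the full ASHRAE classes.

RECOMMENDED_MAX_C = 27   # roughly 81 F, the recommended high end cited above
ALLOWABLE_MAX_C = 45     # roughly 113 F, the highest allowable limit cited above

def classify_inlet_temp(temp_c):
    """Give a rough status for a measured rack inlet temperature in Celsius."""
    if temp_c <= RECOMMENDED_MAX_C:
        return "within the recommended range"
    if temp_c <= ALLOWABLE_MAX_C:
        return "allowable, but increased equipment failure rates may be seen"
    return "above the allowable limit -- add cooling or reduce load"

for reading in (24.0, 30.0, 48.0):
    print(reading, "C:", classify_inlet_temp(reading))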

The easiest way to save money is to reduce the number of running CRAC units. If only half the cooling is required, turning off half the CRAC units gives a direct saving in energy costs -- and in maintenance costs. Using variable-speed instead of fixed-speed CRAC units is another way to accomplish this: the units run only at the speed required to maintain the desired temperature. Bear in mind, though, that CRAC units operate at their most efficient only when running at 100%, and some variable-speed systems are not fully optimized when operating at partial load.
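As a back-of-the-envelope illustration of why this matters, the sketch below compares three ways of meeting half the design cooling load with a bank of CRAC units. The unit count, the 10 kW draw per unit and the cube-law estimate of variable-speed fan power (the standard fan affinity approximation) are all assumptions for illustration; real units at partial load will fall somewhat short of the idealized cube-law figure, as noted above.

UNIT_FULL_POWER_KW = 10.0   # assumed electrical draw of one CRAC unit at 100%
TOTAL_UNITS = 8
LOAD_FRACTION = 0.5         # only half the design cooling load is needed

def all_on_fixed_speed():
    # Every fixed-speed unit running flat out regardless of load
    return TOTAL_UNITS * UNIT_FULL_POWER_KW

def half_switched_off():
    # Half the fixed-speed units simply turned off
    return (TOTAL_UNITS // 2) * UNIT_FULL_POWER_KW

def variable_speed(load_fraction):
    # All units slowed to match the load; idealized cube-law estimate
    return TOTAL_UNITS * UNIT_FULL_POWER_KW * load_fraction ** 3

print("All fixed-speed units on:   ", all_on_fixed_speed(), "kW")
print("Half the units switched off:", half_switched_off(), "kW")
print("Variable speed at 50% load: ", variable_speed(LOAD_FRACTION), "kW")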

Running standard, fixed-speed CRAC units in such a way as to build up "thermal inertia" can be cost-effective. Here, the data center is cooled considerably below the target temperature by running the units, and then they are turned off. The data center is then allowed to warm up until it reaches a defined point, at which the CRAC units are turned back on. Through this process, the units either run at full load, where they operate at their highest efficiency, or are turned off entirely, where they draw no energy at all.
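One way to picture this is as simple on/off (hysteresis) control: run the units flat out until the room is overcooled, then coast until an upper limit is reached. The sketch below shows the idea; the two thresholds and the cool-down and warm-up rates are invented numbers for illustration only.

COOL_TO_C = 20.0      # overcool the room to this point, then switch the units off
RESTART_AT_C = 27.0   # let the room warm to this point, then switch them back on

def crac_should_run(room_temp_c, currently_running):
    """Bang-bang control: full load until overcooled, then off until the upper limit."""
    if currently_running:
        return room_temp_c > COOL_TO_C
    return room_temp_c >= RESTART_AT_C

# Toy simulation of roughly one cycle
temp, running = 25.0, True
for minute in range(90):
    running = crac_should_run(temp, running)
    temp += -0.3 if running else 0.2      # assumed cool-down and warm-up rates
    print("minute", minute, round(temp, 1), "C, CRAC", "on" if running else "off")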

Nevertheless, straightforward volumetric approaches to cooling a data center remain wasteful, no matter how the initial cooling of the air is carried out; most of the cooled air never comes into close enough contact with the IT equipment to do any useful cooling.

Figure 1. Hot/cold aisle.

Keeping cool with air

Hot aisles and cold aisles, if set up properly, reduce the volume of air that needs to be cooled (see Figure 1).

The spaces between facing racks are enclosed, with a "roof" placed over the racks and doors at either end. Cold air is blown into the enclosed space. Blanking plates are used to prevent cold air leaking from within the racks. Ducts direct cold air to the hottest parts of the equipment.

The hot air either vents into the data center, then to the external air, or it is collected and used for heating other spaces. The hot air can also be used for heating water via a heat pump. These systems can be highly engineered, or they can be implemented quite effectively through a home-grown approach using polycarbonate to cover the racks as a roof and polypropylene sheeting as the doors at each end.

The goal is to make each rack its own contained system, so that the volume of air that requires cooling is minimized even further and the cooling can be engineered and targeted more precisely. This is where systems such as Chatsworth Towers come in. These self-contained cabinets hold a 19-inch rack and move cooling air from bottom to top without it mixing with the rest of the air in the data center.

In certain climates, running at higher temperatures could make free air cooling viable without the need for CRAC units. For example, if the choice is to run a data center at 86 F (30 C), an external air temperature below 77 F (25 C) might be enough to require no additional cooling -- as long as moisture levels remain within required limits.
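A simple control check along these lines might look like the sketch below. The 86 F (30 C) target and the roughly 9 F (5 C) margin follow the example above; the relative humidity band is a placeholder assumption, not a figure from any standard.

TARGET_ROOM_C = 30.0                 # the 86 F target from the example above
REQUIRED_MARGIN_C = 5.0              # outside air must be at least this much cooler
HUMIDITY_RANGE_PCT = (20.0, 80.0)    # assumed acceptable relative humidity band

def free_cooling_sufficient(outside_temp_c, outside_rh_pct):
    """True if outside air alone should hold the target temperature."""
    cool_enough = outside_temp_c <= TARGET_ROOM_C - REQUIRED_MARGIN_C
    humidity_ok = HUMIDITY_RANGE_PCT[0] <= outside_rh_pct <= HUMIDITY_RANGE_PCT[1]
    return cool_enough and humidity_ok

print(free_cooling_sufficient(24.0, 55.0))   # True: the CRAC units can stay off
print(free_cooling_sufficient(28.0, 55.0))   # False: mechanical cooling still needed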

Figure 2. The Kyoto Wheel.

However, the basic approach of simply ducting external air can lead to inefficiencies, such as thermal hot spots or dust and contaminants getting into the data center. This is where new designs, such as the Kyoto Wheel, come into play (see Figure 2).

In this scenario, a wound, corrugated metal wheel approximately 10 feet (3 meters) in diameter rotates slowly through a two-compartment space. Hot air from the data center flows through one compartment, transferring its heat to the metal in the wheel. Cold external air flows through the other compartment, picks the heat back up from the metal and exhausts it to the outside. The now-cooled data center air is fed back into the data center to cool the equipment.

The data center loop is enclosed, and the small volume of air that gets transferred as the wheel rotates ensures that only very small amounts of particulates or moisture are mixed from one compartment to the other, with the wheel itself acting partially as a filter.

The benefit here is that the low-speed fans and motors used by the Kyoto cooling method require little maintenance, and the overall system runs with very small amounts of energy, often from solar power and a battery backup. Such a system can last for many years -- it is expected that 25 years will be a low-end lifetime -- and maintenance can involve just a quick clean of the wheel every few months, along with general motor maintenance.

Figure 3. Water cooling methods.

When there's no chill in the air, try water

Aside from these air-based methods, another approach is adiabatic cooling, using the cooling effect of water as it evaporates (see Figure 3).

Water cooling is effective in warmer climates, where direct environmental heat can be used against wet filters to evaporate the water and cool the air being pulled through the filters. This is a two-chamber system, with the filters providing the break between the outside air and the internal data center. However, the filters need to be changed on a regular basis to remove particulate contaminants. In addition, the air's moisture content may need to be adjusted to prevent condensation on IT equipment.

For companies that want to run at extreme equipment densities with high thermal profiles in warm climates, direct water cooling could be one way of solving the problem. IBM has used water cooling in the past, but has advanced the method to extraordinary levels in its Aquasar and Leibniz SuperMUC supercomputer systems. The design gets around the problems that come with mixing water and electricity in a data center: Negative pressure is used to suck the water around the system, instead of pumps being used to push it. Therefore, if a leak occurs, air is pulled into the system instead of water escaping into the data center. Advanced sensors rapidly identify where a leak has occurred, and modular construction allows for repairs while the rest of the system continues to run.

The system uses a hot-water inlet for the cooling liquid, which might seem strange. But in a highly targeted system, water at more than 86 F (30 C) can keep CPUs within operating parameters, and the outlet water temperature can be around 113 F (45 C). High-temperature water coupled with heat exchangers makes the hot water available in other parts of the building. Besides lowering energy usage by around 40%, the system can lead to further savings on the energy used to heat water for the rest of the building.
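As a rough worked example of what such a loop can recover, the sketch below applies the basic relation -- heat moved equals flow rate times specific heat times temperature rise -- to the 86 F (30 C) inlet and 113 F (45 C) outlet quoted above. The flow rate is an assumed figure for illustration; it is not taken from the IBM systems.

FLOW_L_PER_S = 2.0       # assumed loop flow rate in litres per second
DENSITY_KG_PER_L = 1.0   # water, approximately
SPECIFIC_HEAT_KJ = 4.18  # kJ per kilogram per kelvin, for water
INLET_C, OUTLET_C = 30.0, 45.0   # the inlet and outlet temperatures cited above

heat_kw = FLOW_L_PER_S * DENSITY_KG_PER_L * SPECIFIC_HEAT_KJ * (OUTLET_C - INLET_C)
print("Recoverable heat at this flow rate: about", round(heat_kw), "kW")
# Roughly 125 kW in this example -- heat that exchangers can pass on to the building.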

Give hardware a cooling bath

For even more thorough liquid cooling, there are fully immersive systems. Companies such as Iceotope and Green Revolution Cooling provide systems that cover the whole server -- or other piece of IT equipment -- with a liquid that does not conduct electricity but conducts heat very well, carrying heat away from every component in the equipment. These systems are ideal for GPU [graphics processing unit] servers with hundreds of cores, or for other high-density configurations of hot-running CPUs, and they can deal with more than 100 kW per bath -- essentially an enclosed rack on its side. Some immersive systems run at a liquid temperature of 140 F (60 C).

Fans are unnecessary in an immersive system, which saves additional energy. Because the liquids used are far better at removing heat than air or water, hardware can run at higher temperatures and allow heat recovery from the liquid to provide heat for the rest of the building.

The data center is cooled, but now what?

These systems cover the main ways of providing cooling for different needs and different environmental conditions. Whichever approach is chosen, monitoring must be in place to supervise all thermal aspects of the data center.

This is where data center infrastructure management (DCIM) comes in. Thermal sensors and infrared detectors can build a map of hot spots, and computational fluid dynamics enables what-if scenarios that show how new cooling flows and different inlet temperatures will work with different systems.

Once DCIM is in place, continued monitoring ensures hot spots are rapidly identified, allowing systems to be slowed down, turned off or replaced as necessary. In many cases, the sudden appearance of a hot spot indicates an imminent equipment failure. Picking this up before the failure occurs keeps systems uptime and availability high.
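A minimal sketch of the kind of check a DCIM layer might run over periodic sensor readings is shown below. The sensor names, the 5-degree margin and the flat room average are all invented for illustration; commercial DCIM tools do this far more thoroughly.

from statistics import mean

HOT_SPOT_MARGIN_C = 5.0   # assumed: a sensor this far above the room average is suspect

def find_hot_spots(readings):
    """Return sensors reading well above the room average -- possible imminent failures."""
    avg = mean(readings.values())
    return [name for name, temp in readings.items() if temp - avg > HOT_SPOT_MARGIN_C]

readings = {"rack-a1": 24.5, "rack-a2": 25.0, "rack-b1": 33.8, "rack-b2": 24.2}
for sensor in find_hot_spots(readings):
    print("Investigate", sensor, "-- running hot relative to the rest of the room")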

The world of the data center continues to change, and taking an old-world view of cooling is one guaranteed way to waste money. Combining newer guidelines with newer approaches results in more effective cooling for much less in capital, running and maintenance costs.

About the author:
Clive Longbottom is the co-founder and service director at Quocirca, and has been an ITC industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anti-cancer drugs, car catalysts and fuel cells before moving into IT. He has worked on many office automation projects, as well as projects dealing with the control of substances hazardous to health, document management and knowledge management.

This was first published in May 2013
