Trends in data centers: Adiabatic cooling, DP humidity control and more

Could your data center benefit from free cooling or intelligent power strips, LED lighting or SSD storage? Let's parse the latest data center trends.

While I'm reluctant to call any trends in data centers today fads -- everything has a potential value someplace -- certain technologies are over-hyped, some are proven and others are still shaking out.


A great deal of data center improvement focuses on cooling and humidity control, much on power, some on workload management and performance, and still more on data center design and layout. Let's start with the copious cooling and humidity control options available in today's data centers:

Most of these aim to conserve energy through higher operating temperatures, use of ambient air and delivery of cooled air right where it's needed, rather than blast-chilling the entire data center.

Close-coupled or source-of-heat cooling

The closer cooling moves to the heat source, the more effectively and efficiently it works. That's nothing new -- just ask the old mainframe operators or any laptop designer. While close-coupled cooling is on the verge of "mainstream" for data centers, newer approaches are garnering interest as energy efficiency demands grow more aggressive. It simply takes too much energy to push large volumes of air through floor plenums or ducts and then pull it back to the air conditioners.

Promising technologies include immersion cooling, which submerges servers in mineral oil for extremely efficient cooling at minimal energy use. But what do technicians think of working on servers covered inside and out with oil? It won't be right for every shop.

Overhead cooling, now more than a decade old, is undergoing an evolution, as is in-row cooling. We'll continue to improve in-row cooling system designs and implementations until direct liquid cooling takes over.

Direct liquid cooling went out of style for a while but is resurgent. Water is roughly 3,500 times more efficient than air at removing heat, so liquid cooling is all but inevitable as more powerful processors hit the market. What's old is new again; it's just a matter of accepting water in our data centers.
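To put that figure in perspective, here is a minimal back-of-the-envelope sketch in Python. The property values are typical textbook numbers near room temperature, assumed here purely for illustration:

```python
# Rough comparison of how much heat a given volume of water vs. air can carry
# per degree of temperature rise. Property values are typical textbook figures
# at roughly room temperature, assumed for illustration only.

water_density = 1000.0       # kg/m^3
water_specific_heat = 4186   # J/(kg*K)

air_density = 1.2            # kg/m^3 (near sea level, ~20 C)
air_specific_heat = 1005     # J/(kg*K)

water_volumetric = water_density * water_specific_heat  # J/(m^3*K)
air_volumetric = air_density * air_specific_heat        # J/(m^3*K)

ratio = water_volumetric / air_volumetric
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
# Prints a value in the neighborhood of 3,500, in line with the figure above.
```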

The rear door cooler is also seeing greater acceptance, partly because of the renewed interest in water cooling.

The data center industry would be much better off if the big computer room air conditioners went away, replaced by near-exclusive use of newer cooling methods. Education and a willingness to do something differently can make it happen, but the cost and availability of power will ultimately mandate change.

Higher-temperature operation

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) first published guidance on higher-temperature data centers in 2008, but the message is still largely unheeded. Servers don't need to be refrigerated. They run very well with inlet air temperatures in the 75 degrees Fahrenheit to 80°F range (24° Celsius to 27°C). Server manufacturers actually developed the expanded thermal envelope and agreed that it would apply to legacy equipment as well as new. The energy savings can be spectacular, but people first need to accept that this works, then accept that the hot aisle is going to be a lot hotter -- think 100°F (38°C). This encourages using rear door coolers.

But that's just the beginning. The 2011 ASHRAE guidelines allow inlet temperatures up to 104°F (40°C), but only for classes of equipment that are just now, in 2013, in development. We won't see these higher classes of equipment in our data centers for a long time, if ever, and when we do, they are likely to be water-cooled.

Free cooling

Higher-temperature operation usually goes hand in hand with free cooling retrofits. Free cooling is now virtually mandated by the requirements of ASHRAE 90.1-2010 to save energy and should become nearly universal. The capital expense of energy-conscious retrofits should be at least partially offset by operating cost savings.

But implementing free cooling in renovations or upgrades can be challenging and requires a real financial investment. A new standard should make this more realistic in a couple of years. Water-side free cooling will probably predominate, but we should see more air-side installations, such as the Kyoto Wheel method. Ultimately, data center operators will adopt free cooling in more climates than previously thought possible, thanks in part to higher operating temperatures.
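To illustrate why higher operating temperatures expand free cooling, here is a hypothetical sketch. The stand-in climate data and setpoints are assumptions chosen for the example, not measurements from any real site:

```python
import random

# Hypothetical example: generate a year of fake hourly outdoor dry-bulb
# temperatures (degrees F) and count the hours during which outside air is
# cool enough to use directly in an air-side economizer.
random.seed(1)
hourly_temps = [random.gauss(55, 18) for _ in range(8760)]  # stand-in climate data

def free_cooling_hours(temps, supply_setpoint_f):
    """Hours when outdoor air is at or below the required supply temperature."""
    return sum(1 for t in temps if t <= supply_setpoint_f)

for setpoint in (65, 75):  # legacy setpoint vs. higher-temperature operation
    hours = free_cooling_hours(hourly_temps, setpoint)
    print(f"Supply setpoint {setpoint} F: ~{hours} economizer hours per year")
```

The higher setpoint yields substantially more "free" hours, which is the reasoning behind pairing higher-temperature operation with free cooling.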

Evaporative or adiabatic cooling

While evaporation for cooling is real science, it is still a novelty for most data center operators and is catching on slowly. Adiabatic cooling occurs when the pressure on a substance drops and it expands without exchanging heat with its surroundings, lowering its temperature; it happens in nature as magma rises toward the surface of a volcano and as winds move over mountain peaks.

Adiabatic cooling proves effective in warm, dry climates, greatly extending the number of "free cooling" hours. Its major drawback is water usage, but it still consumes less water for the same amount of cooling than a standard cooling tower.
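For a feel for the numbers in a warm, dry climate, here is a minimal sketch using the standard saturation-effectiveness relation for a direct evaporative cooler; the dry-bulb, wet-bulb and effectiveness values are assumptions chosen for the example:

```python
def direct_evap_supply_temp(dry_bulb_f, wet_bulb_f, effectiveness=0.85):
    """Supply air temperature leaving a direct evaporative cooler.

    Standard saturation-effectiveness relation:
        T_supply = T_dry_bulb - effectiveness * (T_dry_bulb - T_wet_bulb)
    """
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Example: a hot, dry afternoon -- 95 F dry bulb, 60 F wet bulb (assumed values).
supply = direct_evap_supply_temp(95, 60)
print(f"Supply air: {supply:.1f} F")  # ~65 F, usable in a higher-temperature data center
```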

Containment cooling

Do not build a new data center without some form of containment cooling. It is one of the easiest and most effective ways yet found to improve cooling performance and energy efficiency in the data center. Many existing data centers also benefit significantly from a containment retrofit.

While containment is impressive, it has been over-hyped as a "be all and end all" solution to every cooling problem, and vendors argue over whether hot aisle or cold aisle containment is better. None of this has helped the end user; both hot aisle and cold aisle solutions are effective if designed and implemented correctly. The choice depends as much on the physical data center as on preference.

Containment will not solve overheating problems caused by improper cooling designs or insufficient air flow or cooling capacity. The newest National Fire Protection Association standards (NFPA 75) can make containment cooling more difficult to implement, and revamping sprinkler and/or gas fire suppression systems adds significant cost. Avoid poor practices: install blanking panels in unused rack spaces, block holes in raised floors, and clear cable blockages out of the under-floor space.

Chimney cabinets and ceiling plenums

Using the ceiling plenum to convey return air to the computer room air conditioners (CRACs) significantly increases CRAC cooling capacity by ensuring the highest achievable return air temperature at the cooling coils.
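The capacity gain follows directly from the standard sensible-heat relation for air, Q (BTU/hr) ≈ 1.08 × CFM × ΔT. Here is a minimal sketch with assumed airflow and temperature values:

```python
def sensible_cooling_btuh(cfm, return_temp_f, supply_temp_f):
    """Sensible cooling delivered by a CRAC: Q = 1.08 * CFM * (T_return - T_supply)."""
    return 1.08 * cfm * (return_temp_f - supply_temp_f)

cfm = 12000   # assumed CRAC airflow
supply = 65   # assumed supply air temperature, degrees F

mixed_return = sensible_cooling_btuh(cfm, 75, supply)   # hot and cold air allowed to mix
plenum_return = sensible_cooling_btuh(cfm, 95, supply)  # hot air ducted back via ceiling plenum

print(f"Mixed-air return: {mixed_return:,.0f} BTU/hr")
print(f"Plenum return:    {plenum_return:,.0f} BTU/hr "
      f"({plenum_return / mixed_return:.1f}x the capacity)")
```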

Combine ceiling plenums with hot aisle containment, of which chimney cabinets are the ultimate form, for the biggest benefit. Hot exhaust air leaves the backs of the cabinets through chimneys on top, passing directly into the ceiling plenum and back to the air conditioners. Hot and cool air never mix, so it is highly energy efficient.

As effective as they are, however, chimney cabinets have not been heavily promoted or widely accepted. This may be because full containment designs allow more cabinet flexibility and accomplish nearly the same thing. Chimney cabinets, however, keep the whole room at the cold aisle temperature, which may be more comfortable to work in.

Dew point humidity control

Far from a fad, controlling humidity via dew point (DP) rather than relative humidity (RH) has been recommended by ASHRAE TC 9.9 since 2008. But the vast majority of data centers still follow RH rules. Users probably can't explain RH any better than they can explain DP, but they are more familiar with RH and with the numbers that have been historically used: 45% to 50% RH.

But RH is meaningless in today's high-density data centers, where temperatures vary widely. Dew point temperature, on the other hand, is essentially the same throughout the room, making it a more reliable metric for regulating humidity. And since so many source-of-heat cooling devices provide no humidity control but must keep coil temperatures above the dew point to avoid condensation, DP control is really the only way to go. So I sincerely hope it becomes a data center trend, with everyone wanting to use dew point as the latest style of energy efficiency.
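For those more comfortable thinking in RH, the Magnus approximation converts a dry-bulb temperature and relative humidity into a dew point. A minimal sketch follows; the coefficients are the commonly published Magnus constants, and the sample room conditions are assumptions chosen for the example:

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.62, 243.12  # commonly published Magnus coefficients
    gamma = math.log(rh_percent / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# Two spots in the same room at very different dry-bulb temperatures report
# very different RH values for the same moisture content...
print(f"{dew_point_c(24, 50):.1f} C dew point at 24 C / 50% RH (cold aisle)")
print(f"{dew_point_c(38, 22):.1f} C dew point at 38 C / 22% RH (hot aisle)")
# ...yet the dew points come out nearly the same, which is why dew point is the
# steadier metric to control against -- and the floor above which coil
# temperatures must stay to avoid condensation.
```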

Nonintegrated humidification

Along with DP humidity control comes greater opportunity to humidify the room separately from the CRACs. Depending on the design, nonintegrated humidification can bring significant energy savings. Far from being a craze today, this kind of humidity control remains very much in the minority of designs.

Intelligent, communicating cooling systems

Automatic cross-coupling and computer control of all the different cooling and humidifying devices in the room is emerging and growing more sophisticated. So far, the technology is limited to certain products from individual manufacturers. Effective, energy-efficient cooling is becoming so complicated, with the wide variety of cooling product types and computing devices, that having the systems self-monitor and self-regulate is the only way to maximize operation. If you choose systems with this kind of control, expect to benefit automatically. Universal controls that cross manufacturers' product lines should become the norm in the future, simply because it is so logical.

About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.

This was first published in October 2013
