Data center cooling and power capacity challenges

Administrators face many complications with data center cooling, but knowing which approach best fits their facility's needs is the first step toward better energy efficiency.

The challenges of data center cooling are enormous. Adequate and efficient cooling is the problem most often cited in surveys, followed closely by power. Nevertheless, many newer data centers don't cool much more efficiently than legacy facilities. Volumes have been written on both the reasons behind cooling problems and their solutions. This tip offers a reminder of what's important in the assessment of designs for new facilities, as well as remedies for existing design issues.

Data center cooling: Back to basics
Although Robert Sullivan (aka Dr. Bob) established the hot- and cold-aisle concept many years ago, it still isn’t practiced in all data centers. So I’ll say it once again: Hot and cold air separation is the starting point. Adding more cooling is both wasteful and largely ineffective if you haven’t first employed a containment design.

Equally surprising are all the open rack spaces and unblocked air paths in cabinets and raised floors. Consultants call this “low-hanging fruit,” meaning it’s easy to make a big difference fast by addressing these issues. You can gain a lot, in both data center cooling effectiveness and energy efficiency, and save some expensive consulting fees, by simply closing every unblocked opening. Yet, we find this still hasn’t been done in the vast majority of data centers.

High-density data center cooling
As important as it is to block unused rack spaces and practice hot- and cold-aisle separation, those tactics still won't solve everything. The big challenge is high-density heat loads. You just can't cool cabinets of 15,000 W or more with only conventional under-floor air (the rough airflow sketch after the list below shows why). Containment will help, but the most effective and efficient way to cool big loads is to get the cooling close to the source of the heat. Various forms of in-row coolers, above-cabinet coolers and rear-door coolers are available, and we will probably see more direct liquid-cooled servers (either water or refrigerant) in the near future. The key is knowing which data center cooling approach, or combination of approaches, fits your situation, budget and growth expectations. This really should be assessed with experienced, professional help, but consider the following as a quick and simple starting guideline:

  • In-row coolers provide closely coupled airflow to the full heights of cabinets. They also capture most of the hot exhaust air before it can get over the tops or around the ends of cabinets. But in-row coolers take up floor space and, despite their supposed flexibility, are rarely moved once they are positioned. They are available in different cooling capacities, in both chilled-water and "compressor" versions. Some have humidification options and some do not, so an auxiliary form of humidity control may be necessary. Choosing which type of cooler to use, and properly locating them in cabinet rows, requires knowledge and experience.
  • Overhead (above-cabinet) coolers require room height but don't usurp floor space. Different types of cooling units may go in the cold aisle, the hot aisle or on top of cabinets, depending on your containment approach. They circulate refrigerant rather than water, which appeals to many people because, when properly configured, refrigerant can be more efficient and safer for equipment than water. However, refrigerant is not as simple to connect and disconnect as water, so units that use refrigerant are harder to relocate than water-based ones. This must be considered if true flexibility is important. Overhead (and some in-row) coolers are also so energy efficient they can be run on uninterruptible power supplies (UPSes) to achieve "cooling ride-through," with the UPS continuing to run the cooler through short power disruptions. This capability can be critical with very high-density servers. Some refrigerant-type coolers have minimum heat load requirements for operation, so they need to be chosen carefully. Most in-row or overhead coolers may remove too much humidity from the air, so another means of humidity management is necessary. That is usually accomplished with conventional perimeter air conditioners that also provide base cooling.
  • Rear-door coolers are a completely different approach. Passive rear-door coolers (no fans) have been shown in independent tests to be even more energy efficient than in-row or overhead devices. Most passive rear-door coolers use chilled water and therefore require an extensive piping network under the floor or overhead. They can be added to standard cabinets, but using them in a data center that also has cabinets without rear-door coolers raises a challenge: rear-door coolers exhaust cool air, while conventional air conditioners work best with higher return air temperatures, so mixing the two creates an anomalous situation. Rear-door coolers have no humidity control and are generally set to maintain four degrees Fahrenheit above dew point to avoid condensation. Therefore, they require conventional air conditioners for humidity control and to cool racks and cabinets that don't justify rear-door coolers. A design incorporating rear-door coolers is the antithesis of normal best practice today, in that recirculation over cabinet tops and around row ends is necessary. If the entire room is cooled with rear-door coolers, it is even possible to use legacy front-to-back cabinet arrangements instead of hot- and cold-aisle designs.
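
To put the earlier 15,000 W figure in perspective, here is the rough airflow sketch promised above. It uses the common sensible-heat rule of thumb (1 W ≈ 3.412 BTU/hr, and BTU/hr ≈ 1.08 × CFM × ΔT in °F); the temperature rise is an assumed, illustrative value, not a measurement from any particular installation.

    # Rough sensible-cooling airflow estimate for one high-density cabinet.
    # Rule of thumb: heat (BTU/hr) = 1.08 * CFM * deltaT(F), and 1 W = 3.412 BTU/hr.
    # The temperature rise below is an illustrative assumption.

    cabinet_load_w = 15_000   # cabinet heat load cited in the text, in watts
    delta_t_f = 25.0          # assumed inlet-to-outlet temperature rise, degrees F

    required_cfm = (cabinet_load_w * 3.412) / (1.08 * delta_t_f)
    print(f"~{required_cfm:,.0f} CFM for a {cabinet_load_w / 1000:.0f} kW cabinet")
    # Roughly 1,900 CFM -- several perforated floor tiles' worth of air aimed at
    # a single cabinet, which is why close-coupled cooling becomes attractive.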

Power is the culprit
All this heat comes from power, and for some, that's as big a problem as data center cooling. A shortage of kilowatts mandates a thoughtful balancing act: if you use too much power to run the computing equipment, you won't have enough left to cool it. So the first step is to reduce equipment power consumption, which will also reduce the cooling power required. Turn off unused or underused servers (and see if anyone yells) and enable energy-saving features. You may even have unnecessary air conditioners. In one case, a new installation was out of space and power and had air conditioners all around the room, yet still had hot spots. Turning off two computer room air conditioners actually improved cooling, and removing all but four still left 2N redundancy, reclaimed space and yielded extra power. Problem solved!
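
To see the balancing act in numbers, here is a minimal back-of-the-envelope sketch in Python. The utility feed and PUE values are assumptions chosen purely for illustration; the point is that a fixed feed caps the IT load at roughly total power divided by PUE, and every watt trimmed from the IT load frees up cooling and distribution overhead as well.

    # Back-of-the-envelope power budget. All numbers are illustrative
    # assumptions, not figures from the article.

    total_facility_kw = 500.0   # assumed utility power available to the room
    pue = 1.8                   # assumed ratio of total facility power to IT power

    # With a fixed feed, the IT load is capped at roughly total / PUE.
    max_it_kw = total_facility_kw / pue
    print(f"Maximum supportable IT load: ~{max_it_kw:.0f} kW")

    # Every kilowatt trimmed from the IT load also trims the cooling and
    # distribution overhead that rides on top of it.
    it_savings_kw = 20.0
    print(f"Retiring {it_savings_kw:.0f} kW of servers frees ~{it_savings_kw * pue:.0f} kW at the utility feed")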

Add UPS capacity the right way
If you have plenty of utility power but too little UPS capacity, there are good and bad ways to fix that too. The bad way is to put in another big UPS, either in addition to the existing UPS or as a replacement for it. Another bad approach is to add small UPS units to cabinets all over the room. This is a reliability and battery maintenance nightmare. Today, we have ways of adding UPS capacity incrementally. Some systems are modular, with smaller plug-ins that administrators can add as the need for capacity grows. Others enable the additional capacity via firmware. Either way, the goal is two-fold: to pay only for the capacity you need, when you need it, and to “right size” the UPS to maximize energy efficiency. Another advantage of incremental solutions is the ability to have an N+1 configuration for a little more money than an N design, providing a level of redundancy you may not have been able to previously cost-justify or find space for.
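
As a simple illustration of the incremental approach, the sketch below compares how many plug-in modules an N and an N+1 configuration require as the load grows. The 25 kW module size and the load figures are hypothetical; the takeaway is that redundancy costs one extra module rather than a whole second UPS.

    import math

    # Incremental (modular) UPS sizing sketch. The module size and the load
    # values are hypothetical assumptions for illustration only.

    MODULE_KW = 25.0  # assumed capacity of one plug-in UPS module

    def modules_needed(load_kw: float, spares: int = 0) -> int:
        """Modules required to carry the load, plus optional spare modules."""
        return math.ceil(load_kw / MODULE_KW) + spares

    for load_kw in (40, 90, 140):
        n = modules_needed(load_kw)            # N: just enough capacity
        n_plus_1 = modules_needed(load_kw, 1)  # N+1: one spare module
        print(f"{load_kw:>3} kW load -> N = {n} modules, N+1 = {n_plus_1} modules")
    # Redundancy costs exactly one extra module at each step, rather than a
    # second monolithic UPS of equal size.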

Verify actual UPS usage
Before upgrading a UPS, administrators should first determine whether they are really out of capacity. Check the loads for phase balance: unless all three power phases are drawing close to the same power, there is still unused UPS capacity, even if the UPS says it is at 98%. An administrator would need to re-plug loads, and potentially add some branch circuits, but that is still easier and cheaper than purchasing a new UPS. Putting "smart" power strips in cabinets could help as well. These sophisticated power strips (unfortunately called power distribution units [PDUs] by many people, which confuses them with the large, true PDUs we have known for decades) have local power draw displays, as well as remote power readouts via the network. This makes it much easier to find and correct phase imbalances.
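
To see why a "98% full" reading can be misleading, consider this minimal sketch. The per-phase rating and readings are hypothetical, but they show how a three-phase UPS is effectively limited by its most heavily loaded phase, so imbalanced loads hide capacity that re-plugging can reclaim.

    # Why a "98% full" UPS reading can be misleading. The per-phase rating
    # and the phase readings below are hypothetical assumptions.

    per_phase_rating_kw = 10.0                        # assumed rating per phase (30 kW total)
    phase_loads_kw = {"A": 9.8, "B": 5.0, "C": 4.2}   # assumed per-phase draws

    worst_phase = max(phase_loads_kw.values())
    reported = worst_phase / per_phase_rating_kw                       # limited by the fullest phase
    actual = sum(phase_loads_kw.values()) / (3 * per_phase_rating_kw)  # true overall utilization

    print(f"Reported (worst-phase) utilization: {reported:.0%}")   # 98%
    print(f"Actual overall utilization: {actual:.0%}")             # 63%
    # Re-plugging loads to balance the phases reclaims the difference without
    # buying any new UPS capacity.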

In short, don’t just add UPS or air conditioning capacity without knowing what’s really needed. And be suspicious of legacy solutions proposed for modern problems. You shouldn’t open a paint can with a steak knife. There’s a tool for every job, as well as a right and wrong way to use it.

About the author: Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom & Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professionals program, is a data center power and cooling expert, is widely published, speaks at many industry seminars and is a corresponding member of ASHRAE TC 9.9, which publishes a wide range of industry guidelines.

This was first published in September 2011
