
Air flow management strategies for efficient data center cooling

To achieve efficient data center cooling, you must reduce your facility's air flow waste. In this tip, an expert outlines air flow management strategies and best practices that can improve data center air flow, including blanking panels, proper placement of perforated raised-floor tiles and hot-aisle/cold-aisle design.

One of the most common complaints that design engineers hear from data center owners and operators is that they need additional cooling capacity because the existing system doesn't maintain an acceptable temperature at the data equipment inlets. But in most cases, the problem isn't one of insufficient capacity, but of poor air flow management. The good news is that adopting a strategy to improve data center air flow results in two positive changes. First, by reducing the amount of air that needs to be supplied, less energy is used for data center cooling. Second, temperature distribution across cabinets is improved.

Improving air flow in a facility requires that all the air flow supplied to the data room produces effective cooling. Air flow waste should be minimized. To understand the implications of this goal, it is important to understand the basics of heat transfer.

Basic heat transfer calculation

The basic equation of sensible heat transfer for air at sea level is Q = 1.085 x ∆T x CFM.

Q is the amount of heat transferred (BTU/hr).
1.085 is a constant that incorporates the specific heat and density of air at sea level.
∆T is the temperature rise of the air (°F).
CFM is the air flow (cubic feet per minute).
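
To make the arithmetic concrete, here is a minimal Python sketch of the equation; the 20°F rise and 160 CFM server airflow are illustrative values, not figures from this article.

```python
def heat_btuh(delta_t_f: float, cfm: float) -> float:
    """Sensible heat carried by an air stream at sea level, in BTU/hr."""
    return 1.085 * delta_t_f * cfm

BTUH_PER_KW = 3412.14  # unit conversion

# Example: a server exhausting air 20 F warmer than its inlet, at 160 CFM
q = heat_btuh(20.0, 160.0)
print(f"{q:.0f} BTU/hr = {q / BTUH_PER_KW:.2f} kW")  # 3472 BTU/hr, about 1 kW
```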

Computer equipment moves air through the use of internal fans to remove heat from the processors and internal circuitry. An air-handling unit (AHU) moves air with its own fan to remove the aggregate heat load generated by the computer equipment (the IT load). Unfortunately, these two air flows are rarely equal. However, the heat transferred from the IT equipment to the AHUs is the same, so the basic equation can be restated for each side:

Q_AHU = 1.085 x ∆T_AHU x CFM_AHU

Q_IT = 1.085 x ∆T_IT x CFM_IT

Q_AHU = Q_IT

Setting the two expressions equal, rearranging and simplifying:

CFM_AHU = CFM_IT x (∆T_IT / ∆T_AHU)   (Equation 1)
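
Equation 1 can be read as an oversupply multiplier. A short illustrative sketch (the airflow and ∆T figures below are assumed for the example, not from the article):

```python
def required_ahu_cfm(cfm_it: float, dt_it: float, dt_ahu: float) -> float:
    """Equation 1: AHU airflow needed to carry the same heat as the IT load."""
    return cfm_it * (dt_it / dt_ahu)

# If the servers collectively move 50,000 CFM at a 25 F rise, but bypass
# mixing dilutes the return air so the AHUs see only a 12.5 F rise,
# the AHUs must move twice the server airflow:
print(required_ahu_cfm(50_000, 25.0, 12.5))  # 100000.0 CFM
```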

The biggest culprit leading to poor air flow conditions in data centers is bypass air flow. Bypass air flow is cold supply air that never provides productive cooling at the IT load; in essence, it passes around the load and mixes with warm room air before returning to the AHU. For comfort cooling applications, this mixed condition is not only acceptable, it's considered good practice. That is not the case for a data center environment, however. The occupants of a data center are the cabinets, and the cabinets' comfort is gauged strictly by conformance with an industry-accepted thermal envelope (or range) that applies only at the inlets of the datacom equipment. (A temperature in excess of 100°F on the backside of the server, or anywhere else other than at the air inlets, is irrelevant to the server's comfort.)

Recirculation air is bypass air's partner in crime. When an insufficient amount of supply air (CFM_AHU) is delivered to the equipment inside the cabinets (because the bypass component is large), the server fans pull air (CFM_IT) from the most immediate source -- the warm air circulating nearby. For a fixed source of CFM_AHU, the larger the proportion of the flow that goes to bypass, the larger the amount made up by recirculation air will be.

In order to guarantee that server inlet temperatures don't exceed the maximum recommended temperature, the most immediate solution may seem to be colder supply air. But since that doesn't change the proportion of air going to bypass, some servers will still be subjected to recirculation air, which could put them at risk. The users of the space will conclude that if there are hot spots, there is insufficient cooling available. One could lower the AHU supply air temperature even further until the hot-aisle temperatures fall below the maximum recommended server inlet temperature. With this approach, recirculation wouldn't appear to pose a significant problem, since even the recirculated air is cool enough for the servers. However, this approach is wasteful because it forces the supply air temperatures down into the mid-50s °F. (Isn't this the way data centers used to operate?) At these low supply air temperatures, the cooling plant operates less efficiently, the AHU coil dehumidifies (forcing the system to add moisture back to the space to maintain a minimum space dew point), and the hours of outdoor-air economizer cooling are severely reduced.

The other solution would seem to be to add more air. That approach doesn't work either. Looking at Equation 1 above, one can see that increasing CFM_AHU must decrease ∆T_AHU. Keep in mind that Q_IT doesn't change regardless of what happens with the air flow. Q_AHU is the load cooled by the sum total of all AHUs, regardless of how many AHUs are available, and will always equal Q_IT. The only thing achieved by increasing CFM_AHU is that the cold supply air will eventually reach the tops of the cabinets, and presumably the warmest server inlets, by brute force. But at what cost? How much bypass air must result in order to address those hot spots? How can one tell that this is happening in a given data center? The answer is straightforward -- look at the ∆T_AHU at all the AHUs. If the average ∆T_AHU is half of the average ∆T_IT, then the AHUs are pushing twice as much air as the server fans need.
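
As a rough sketch of that diagnostic, assuming you can average the measured ∆T across the AHUs and across the server population (the readings below are illustrative):

```python
def oversupply_ratio(avg_dt_it_f: float, avg_dt_ahu_f: float) -> float:
    """How many times more air the AHUs move than the server fans draw.

    A ratio of 1.0 means supply matches demand; 2.0 means the AHUs
    are pushing twice the airflow the IT equipment actually needs.
    """
    return avg_dt_it_f / avg_dt_ahu_f

# Illustrative readings: servers average a 24 F rise, but bypass air
# dilutes the return so the AHUs average only a 12 F rise.
print(oversupply_ratio(24.0, 12.0))  # 2.0 -> twice the needed airflow
```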

Containment is one approach that completely eliminates bypass and recirculation air. By closing off the hot and cold aisles (or ducting the hot return air out of the cabinets), the air flow dynamics within the data center are forced such that CFM_AHU = CFM_IT. This in turn forces ∆T_AHU to equal ∆T_IT.

Why, then, don't all data centers use containment? Some users don't like that containment restricts access to the cabinets, cable trays or aisles. A less obvious problem is that containment requires a carefully planned control strategy to prevent excessive pressure differences between hot and cold aisles. If the pressurization control strategy is wrong, the server fans can starve for air and speed up to maintain acceptable processor temperatures, driving CFM_IT to its maximum and increasing the servers' energy consumption.

Anecdotally, it appears that few data centers operating at less than 200 watts per square foot use containment. The simple truth is that with good air flow management strategies, the effects of bypass and recirculation air flows can be mitigated. The remainder of this article addresses these strategies as they relate to non-contained spaces.

Data center air flow control best practices

Create hot and cold aisles. The most obvious air flow management strategy is to separate hot and cold air streams by arranging all the cabinets in parallel rows with the inlet sides of the servers facing each other across an aisle (this forms a cold aisle). This is the first step toward preventing a well-mixed thermal environment. Closing gaps between adjacent cabinets within each lineup also helps to reduce bypass and recirculation air flows.

Install blanking panels in all open slots within each cabinet. It's easy to forget that bypass and recirculation can occur inside cabinets. An air flow management system cannot effectively cool the equipment in a cabinet without eliminating internal paths of bypass and recirculation. Blanking panels reduce these air flows and are considered a must for proper air flow inside a cabinet. Recognizing that blanking panels are frequently removed and not replaced during installation or removal of hardware within a cabinet, it would make sense for the IT staff to populate equipment from the bottom of the cabinet up, making sure there are no gaps between servers. In this manner, internal recirculation can be minimized.

Place perforated tiles in cold aisles only. Placing perforated tiles, or perfs, in any location but cold aisles increases bypass. The one exception is a maintenance tile, which can be carried to wherever work is being done in a hot aisle. An IT employee can stand on the tile and work in relative comfort, but the tile should not be left in the hot aisle permanently.

Use air restrictors to close unprotected openings at cable cutouts. A single unprotected opening of approximately 12" x 6" can bypass enough air to reduce the system cooling capacity by 1 kW of cabinet load. When each cabinet has a cable cutout, a large proportion of the cooling capacity is lost to bypass.
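
As a hedged sanity check on that figure, one can estimate the flow through the opening with the standard-air relation v [fpm] = 4005 x sqrt(∆P [in. w.g.]); the discharge coefficient and the 20°F ∆T below are assumptions for the example, not values from the article.

```python
import math

def cutout_bypass_kw(area_sqft: float, dp_in_wg: float,
                     dt_f: float = 20.0, cd: float = 0.6) -> float:
    """Rough cooling capacity lost through an unsealed floor opening (kW)."""
    # Standard-air relation: velocity [fpm] = 4005 * sqrt(dP [in. w.g.]),
    # derated by an assumed discharge coefficient for a ragged cutout.
    velocity_fpm = 4005.0 * math.sqrt(dp_in_wg)
    cfm = cd * area_sqft * velocity_fpm
    return 1.085 * dt_f * cfm / 3412.14  # BTU/hr -> kW

# A 12" x 6" cutout (0.5 sq ft) at a typical 0.03 in. w.g. plenum pressure:
print(f"{cutout_bypass_kw(0.5, 0.03):.1f} kW")  # roughly 1.3 kW
```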

Seal gaps between raised floors and walls, columns and other structural members. Sealing the spaces between the raised floors and room walls is a no-brainer. Those gaps are easily identified by a simple visual inspection. A more subtle form of bypass can be found when column walls are not finished above the ceiling and below the floor. Often, the sheet rock used to enclose a column forms a chase for direct bypass of air into the return air stream. These chases must be sealed to reduce bypass air flow.

Select the appropriate type of tile. Frequently, users address air shortage and hot spots by installing high-capacity grates in the floor near the hot spots. Grates typically pass three times more air than perfs at a given pressure difference. Although placing grates at the hot spots may seem like it solves the problem, it actually makes the problem worse. When the grates are installed in a raised-floor environment dominated by perfs, and that under-floor space is maintained at a fixed pressure, the output of the grate is such that the air will blow off the top of the aisle with very little capture at the cabinets. A typical grate will pass 1,500 CFM of supply air (CFM_AHU) at 0.03" (a typical under-floor pressure for perfs). Most of that air, with a capacity to cool up to 10 kW, will be bypassed, forcing the user of the space to run more AHU capacity and lowering the ∆T_AHU.
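
A quick check of those numbers, assuming a 20°F AHU ∆T; the same square-root relation between tile flow and plenum pressure also explains the pressure guidance in the next paragraph:

```python
import math

def heat_capacity_kw(cfm: float, dt_f: float = 20.0) -> float:
    """Cooling capacity carried by an air stream at an assumed delta-T (kW)."""
    return 1.085 * dt_f * cfm / 3412.14  # BTU/hr -> kW

def flow_at_pressure(cfm_ref: float, dp_ref: float, dp_new: float) -> float:
    """Tile airflow scales with the square root of under-floor pressure."""
    return cfm_ref * math.sqrt(dp_new / dp_ref)

# A grate passing 1,500 CFM at 0.03 in. w.g.:
print(f"{heat_capacity_kw(1500):.1f} kW")  # ~9.5 kW, i.e., "up to 10 kW"

# Halving the plenum pressure trims the grate's flow to about 71%:
print(f"{flow_at_pressure(1500, 0.03, 0.015):.0f} CFM")  # ~1061 CFM
```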

It's important to decide the width of each cold aisle early in the data room planning process since the aisle's width determines the amount of cooling that can be delivered to it. If perfs will be used, all the cold aisles that share the under-floor plenum should be supplied with perfs. If the space will be subjected to higher loads, grates should be used in all cold aisles that share the same under-floor plenum. In addition, that under-floor plenum pressure should be reduced to approximately half of what is typically used for perfs in order to avoid the bypass air associated with the air blowing off the top of the cold aisle.

Manage the placement of perforated tiles by cold aisle. Calculate the load in each cold aisle and place an appropriate number of perfs or grates (but never a mix of the two) to cool the load in that aisle. Placing too few tiles in the cold aisle will cause recirculation; placing too many will increase the amount of bypass. If one needs to choose between a little recirculation and a little bypass, the latter is always the better deal.

The user of the space must keep track of the load by cold aisle. When the cold-aisle loads change, the number of tiles must be adjusted accordingly.
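
One way to do that bookkeeping, sketched in Python; the aisle load, server ∆T and per-tile airflow below are illustrative assumptions, not published ratings:

```python
import math

def tiles_for_aisle(aisle_load_kw: float, dt_it_f: float,
                    cfm_per_tile: float) -> int:
    """Tiles needed to supply one cold aisle's airflow demand."""
    # Convert the aisle's IT load to required airflow via the basic heat
    # equation, then divide by the flow one tile delivers at design pressure.
    cfm_needed = aisle_load_kw * 3412.14 / (1.085 * dt_it_f)
    return math.ceil(cfm_needed / cfm_per_tile)

# Illustrative: a 40 kW aisle, servers running a 25 F rise, perfs
# passing ~500 CFM each at the design plenum pressure.
print(tiles_for_aisle(40.0, 25.0, 500.0))  # 11 tiles
```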

There are many factors involved in determining the optimum amount of bypass. Without these best practices to reduce bypass and recirculation air flows, the amount of bypass could be such that CFM_AHU is 50% to 100% larger than CFM_IT. With the best practices presented here, it may be possible to achieve a disparity of 25% or less.

About the author:
Vali Sorell has 25 years of experience as an HVAC design engineer. He is one of the lead technical resources for mechanical design in Syska Hennessy Group's Critical Facilities Group. Through industry publications and speaking engagements, Sorell has become a leader in updating best practices and advancing the principles of sustainable design for critical facility work. He is also a voting member of ASHRAE TC-9.9 "Mission Critical Facilities, Technology Spaces & Electronic Equipment," and serves as the TC-9.9 Program Subcommittee Chair.

