How can I avoid overheating a 42-unit rack that's full of 1-unit servers?

Expert Robert McFarlane discusses strategies for dealing with overheating servers in a 42U rack.

Most data centers were never designed for the heat densities that full cabinets of small form factor (1U and 2U) servers create. Servers in a rack overheat because not enough of the available cooling is reaching the cabinets, because there isn't enough cooling capacity to handle the heat being generated in the room, or both.
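It helps to put numbers on the problem before changing anything. The short sketch below, written in Python purely for illustration, uses the common sensible-heat rule of thumb (CFM ≈ 3.16 x watts / temperature rise in °F) to estimate how much airflow a fully loaded cabinet needs; the server count, per-server wattage and temperature rise are assumptions you should replace with your own measurements.

```python
# Rough estimate of cabinet heat load and the airflow needed to carry it away.
# The 3.16 factor comes from the sensible-heat equation (BTU/hr = 1.08 x CFM x dT)
# with watts converted to BTU/hr. All inputs below are illustrative assumptions;
# substitute measured values for your own servers and room.

SERVERS_PER_CABINET = 40   # 1U servers in a 42U rack (assumed)
WATTS_PER_SERVER = 300     # measured or nameplate-derived draw per server (assumed)
DELTA_T_F = 20             # allowable air temperature rise, degrees F (assumed)

total_watts = SERVERS_PER_CABINET * WATTS_PER_SERVER
required_cfm = 3.16 * total_watts / DELTA_T_F

print(f"Cabinet load:   {total_watts / 1000:.1f} kW")
print(f"Airflow needed: {required_cfm:.0f} CFM")
```

With those assumed numbers the cabinet is generating about 12 kW and needs nearly 1,900 CFM of cool air, far more than a single perforated floor tile typically delivers, which is exactly how these overheating situations develop.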

The first thing is to get the most out of the air conditioning you already have. Read my article "Block those holes!" to get started. If overheating persists after following the advice in that column, then either there isn't enough cooling capacity, the air delivery system is improperly designed, or there isn't enough air getting to the equipment. This is also a good time to make sure you're not blocking under-floor air with cables, and that your air conditioners have been serviced recently and are operating properly.

Next, space out the servers. Putting 1U of open space between boxes is unlikely to help except to simply reduce the total load in each cabinet; that is not what is meant by spacing. Hot air rises, so the intake air will be warmer by the time it reaches the top of the cabinet than it was when it came out of the floor. And some of the hot air from the back "hot aisles" will inevitably spill over the tops of the cabinets and re-enter the upper servers. As a general rule, load the cabinets from the bottom up, starting about four or five U's up from the bottom and covering that lowest space with a blank panel. Block the remaining open space with blank panels as well.
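To make the bottom-up loading rule concrete, here is a minimal sketch, again in Python and purely illustrative, that lays out a 42U cabinet with blank panels in the lowest four U's, servers stacked above them, and blank panels filling whatever is left at the top. The counts are assumptions, not recommendations.

```python
# Illustrative 42U layout: blanks at the bottom, servers loaded bottom-up,
# blanks filling the unused space above. Counts are assumptions, not a spec.

RACK_UNITS = 42
BOTTOM_BLANKS = 4     # leave the lowest four or five U blanked off (assumed)
SERVER_COUNT = 20     # a half-full cabinet (assumed)

layout = {}
for u in range(1, RACK_UNITS + 1):
    if u <= BOTTOM_BLANKS:
        layout[u] = "blank panel"
    elif u <= BOTTOM_BLANKS + SERVER_COUNT:
        layout[u] = f"1U server #{u - BOTTOM_BLANKS}"
    else:
        layout[u] = "blank panel"

for u in sorted(layout, reverse=True):   # print from the top of the rack down
    print(f"U{u:02d}: {layout[u]}")
```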

"Spacing" means leaving two or more lightly-loaded cabinets between the higher loaded ones. In legacy data centers, we usually suggest no more than fifteen to twenty 1U servers in a cabinet as a "rule of thumb." But this is totally dependent on the power demand of each server, and the air delivery available. I would suggest starting with the problem cabinets only half full and seeing if there's enough improvement. Also make sure that cables are not blocking the rear exhausts of the servers.

If this doesn't help enough, and you've taken all the steps to get the most out of the air conditioning you have, you may need more cooling capacity. There are a number of new solutions on the market with more coming. But this is an expensive and potentially disruptive undertaking. Read my articles "Let's add an air conditioner" and "Cabinets, bloody cabinets!".

In general, I would probably not recommend simply adding another CRAC (computer room air conditioner) unit. Rarely can it be put where it will really solve the problem, and it will often create new ones. Look at localized overhead cooling, in-row cooling, or even self-cooled cabinets. These all require infrastructure of some kind. But if properly selected, designed and used, any of these will likely be more cost-effective and flexible than rolling in another big air conditioner and enduring the disruption of welding and soldering in your data center.

There are also Computational Fluid Dynamics (CFD) analysis tools that, in the hands of an expert, can pinpoint problems and help in developing solutions before a lot of money is invested.

ABOUT THE AUTHOR: Robert McFarlane is a pioneer in the field of building cabling design. He has been asked to speak at countless seminars on building infrastructure for electronic communications, evolving technologies and the requirements of trading floor and data center design. Mr. McFarlane served for twelve years as President of Interport Financial, Inc., a firm specializing in designs for financial trading floors and critical data centers.
