
McFarlane's cabinet fundamentals

This column originally appeared on TechTarget's Expert Answer Center as a post in Robert McFarlane's blog. Robert served as the on-demand expert on the Expert Answer Center for two weeks in October to November 2005, during which he was available to quickly answer questions on data center design as well as to write daily blog entries. Keep an eye on the Expert Answer Center for topics that could help your IT shop.


It's hard to believe there could be so many companies making so many different racks and cabinets, or that there could be so many different ways touted to address the two main concerns facing everyone today: equipment cooling and cable management. Simply because there are so very many, we will avoid referring to any by name or including links to Web sites. It would be too long a list, and everyone we miss would object to the omission. You can use search engines as well as we can.

Instead we will dwell on some fundamentals and what our firm looks for and evaluates in recommending, specifying and selecting racks and cabinets for our clients' data centers. (Racks and cabinets for IDF rooms and other purposes like audio/visual equipment have different considerations and will not be addressed here.)

First let's define our terms. A "rack" is either a two-post or four-post open-frame mounting. A "cabinet" is always a four-post design, but with options for side panels, top panels and front and rear doors making full enclosure possible. If a four-post cabinet frame is put on the floor without enclosure panels or doors, by our definition it's a "rack." Conversely, if a product sold as a "rack" has components available to enclose it, we will regard it as a "cabinet" if those components are installed. Both racks and cabinets have EIA-standard (Electronic Industries Alliance) hole spacing in the vertical mounting rails, which may be in the front only, or in both front and rear. As a general rule, "rack rails" are fixed, whereas "cabinet rails" are usually adjustable forward and backward within the structural frame to accommodate different mounting depths. Rail holes are usually one of two types: tapped, or square-cut for use with snap-in cage nuts. Rail spacing is designed for either 19" or 23" EIA standard panel widths, and herein lies one of today's more interesting and obscure concerns.

ANSI/EIA Standard RS-310 specifies the minimum clear width between rails as 450 mm (17.7165"). However, there is hardware on the market today from at least two major-name vendors that specifies a clear rail width of 17.75" (450.85 mm) or more. A difference of less than a millimeter may not seem like much, but we've seen more than 200 cabinets in a single data center that met the standard, yet the servers still wouldn't fit. And even if a cabinet bows slightly (as most metal will tend to do), it may provide adequate spacing at the top and/or bottom but not in the middle, or vice-versa. In short, verify the mounting clearance requirements of your hardware, and specify your racks or cabinets accordingly if special tolerance requirements must be met. Otherwise the manufacturer is well within his rights to charge you for field adjustments if he can demonstrate that his mounting rails meet the published standard, which is what will naturally be assumed if you didn't specify otherwise.
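Since the difference between "meets the standard" and "fits the hardware" can be less than a millimeter, it's worth doing the arithmetic before the purchase order goes out. Here is a minimal sketch of that check; the cabinet widths, bowing allowance and function name are illustrative assumptions, not vendor figures.

```python
# Pre-order sanity check: compare each server's required clear rail width
# against a cabinet's published clear width, with a small allowance for frame
# bowing. All numeric values here are illustrative, not vendor specifications.

EIA_MIN_CLEAR_WIDTH_MM = 450.0   # ANSI/EIA RS-310 minimum clear width between rails
BOW_ALLOWANCE_MM = 0.5           # assumed margin for slight bowing of the frame

def server_fits(cabinet_clear_width_mm: float, server_required_width_in: float) -> bool:
    """Return True if the server's required clear width fits with margin to spare."""
    required_mm = server_required_width_in * 25.4
    return cabinet_clear_width_mm - BOW_ALLOWANCE_MM >= required_mm

print(server_fits(EIA_MIN_CLEAR_WIDTH_MM, 17.75))   # False: 450 mm < 450.85 mm required
print(server_fits(452.0, 17.75))                    # True: wider rails specified up front
```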

From this point on, we'll concentrate on cabinets, since those are what we use today to mount most of our server hardware. Let's examine several of the key factors that relate to cable management and cooling concerns.

Size: This is one place where size does count! Servers have gotten smaller vertically by growing in depth. There are now "1U" servers that are 36" deep, and that doesn't include the rear connectors. A 42" deep cabinet still leaves only 6" behind equipment 36" deep, and that's only if the front mounting rails are very close to the front of the cabinet. Thankfully, most hardware has not yet reached this depth, but 36" deep cabinets are simply too limiting for much of today's technology. Even if you can't, or don't want to, install deep cabinets everywhere, or you plan to reuse existing smaller cabinets, it's wise to plan your layout to accept as many deep cabinets as reasonably possible. And since most equipment loads from the front, you'll need that much aisle depth between cabinets as well (which you should have for cooling anyway). It's very difficult to install a 36" deep server in a 36" deep walking space.

Cabinet height is a matter of available space and personal preference, but keep two things in mind. First, it's not easy to install or work on anything at the top of an 8-foot-high cabinet. Second, cooling at the top of any cabinet or cooling a fully loaded cabinet of any height is problematic. The common 7-foot (84") nominal height is the most available and is the best choice for the vast majority of situations. As to cabinet width, we will address that next.

Cable management: Sir Walter Scott didn't have our modern data centers in mind when he said, "Oh, what a tangled web we weave," but that part of the phrase sure fits today. The higher equipment density made possible by small form factor and blade servers brings with it a prodigious number of cables. Consider a cabinet with 42 "1U" servers, each dual-corded and dual-homed with two NICs, each with monitor connections to a KVM switch and with a fiber connection to a SAN. This might not be the norm, but there are plenty of cabinets with this kind of load and many more that come close. That's five UTP cables, two power cables and one fiber pair per device, for a total of 336 wires and cables. If you're running a permanent cable infrastructure (highly recommended) from cabinet patch panels back to your network switches and SAN, that's another 210 UTPs plus a fiber bundle. You're just not going to get all this neatly into a conventional 23" wide cabinet when the standard chassis is already more than 17" wide. And if you use the space behind the servers in a deep cabinet, you'll block the hot air exhaust from the machines. (This is why those "folding cable managers" are so bad. When they ship with the machine, I figure the manufacturer must know how much sooner it will cause you to buy a new server due to overheating.) Remember, the fans inside these small server chassis are, of necessity, very small. They run fine in "free air" conditions, but make them push against any kind of blockage and they slow down under the static pressure buildup. They are just not powerful enough to push the necessary volume of air past that virtually solid wall of wires.
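To make the arithmetic concrete, here is a minimal sketch that totals the cabling for the fully loaded example cabinet above; the per-server counts are the ones used in that example, not a universal rule.

```python
# Cable count for the example cabinet: 42 "1U" servers, each with 5 UTP runs
# (NICs, KVM, etc.), 2 power cords and 1 fiber pair to the SAN.

SERVERS = 42
UTP_PER_SERVER = 5
POWER_PER_SERVER = 2
FIBER_PAIRS_PER_SERVER = 1

in_cabinet = SERVERS * (UTP_PER_SERVER + POWER_PER_SERVER + FIBER_PAIRS_PER_SERVER)
permanent_utp = SERVERS * UTP_PER_SERVER   # patch-panel runs back to the switches and SAN

print(in_cabinet)      # 336 wires and cables inside the cabinet
print(permanent_utp)   # 210 UTPs of permanent infrastructure, plus a fiber bundle
```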

Therefore, for any part of the data center where high densities are likely (which, by our definition, means more than 12-15 machines), we recommend wide cabinets, which generally means from 28" to 30", depending on manufacturer. (Note that the mounting rails in these cabinets are still designed for EIA standard 19" wide devices.)

But cabinet width is only part of the story. That width must be efficiently utilized in order to be worthwhile, and that means a good method of dressing the cable to each side of the equipment (a cable management system) and a good way of mounting multiple power strips without blocking access to the cable. Even with the extra width, this is not as easy as it sounds. We have seen some very clever approaches to doing this, and some that show no thought at all. There are also highly proprietary solutions, meaning you can't use XYZ company's "smart power strips" (sometimes also called "PDUs" or "CDUs") because they won't fit or because they defeat the cabinet's cable-management concept. This is something you should look at very carefully in selecting a cabinet, and should observe in real situations, not just in pictures.

Cooling: I can't help but think of Aerosmith's recording "Big Ones" -- their blockbusters. But their performance is proven. Every cabinet vendor, however, claims they have the "big hit": their product cools better than anyone else's. We're not going to dispute them. (What? You think we want to get sued??) But just as Aerosmith's best song might not be right for every occasion, each cooling solution might work well in one situation but not perform well in another.

Therefore, what we are going to do is give you the principles -- some things to consider -- and unequivocally guarantee that no one, irrespective of what their marketing department dreams up, is going to defy the laws of physics. It still takes a certain volume of air at some known temperature to cool a device by a given number of degrees.
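To put a number on that law, here is a minimal sketch using the standard sensible-heat approximation for air; the constants assume air near sea-level density, and the example load and temperature rise are illustrative.

```python
# Airflow needed to remove a given heat load, from the sensible-heat relation
#   BTU/hr ~= 1.08 x CFM x dT(deg F), with 1 W ~= 3.412 BTU/hr.
# The 1.08 factor assumes air near sea-level density.

def cfm_required(load_watts: float, delta_t_f: float) -> float:
    """Approximate CFM needed to carry away load_watts at a rise of delta_t_f deg F."""
    btu_per_hr = load_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

# Example: a 5 kW cabinet with a 20 deg F rise across the equipment
print(round(cfm_required(5000, 20)))   # ~790 CFM that has to reach the server intakes
```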

The goal of every cabinet design is, or at least should be, to help deliver the necessary amount of air to every device, regardless of how high or low it's mounted in the cabinet. Every device has been designed and tested in the lab to move that air through the box in the right volume, at the right velocity and past the right components. But getting this air out of the floor and up to the full height of the cabinet in sufficient quantity and at the right temperature is not all that easy. And make no mistake: The manufacturers dump that problem right in your lap. They know their servers will work in the lab, and they'll usually tell you what it requires in some fashion or another. (Unfortunately, too few follow ASHRAE's recommendations for providing this information, but hopefully that will change in time.) If we can't meet the requirements with air directly from the floor, we may need a little help, and that's where all the special air-moving cabinet accessories come in.

Today's basic server cabinet should have at least 63% open perforated front and rear doors, a closed top plate with sealable cable pass-throughs and a base frame that fits reasonably tightly to the floor. It also helps to have interior side panels to keep air channeled through each cabinet and to avoid mixing with hot air from an adjacent cabinet. At least one manufacturer offers an insertable interior panel that can be put into channels or removed, depending on need. It's a little pricey, but it's an interesting idea.

If we're getting good air flow out of our raised floor and we still have cooling issues in the cabinet, then one of the "air booster" solutions may help. There are basically three types: bottom front blowers that pull air from the floor and discharge it upward in front of the equipment; top-of-cabinet fans that pull air through the cabinet, either from holes in the floor or through the doors; and rear-door fans that pull air through the front door and through the equipment. Let's examine each of these in a little more detail.

Bottom front blowers work very well, with three important caveats:

  1. They pull a fixed volume of air out of the floor, so it's critical to make sure the CRACs can supply it and that the suction doesn't deprive something else of air. (A simple supply-versus-demand tally appears in the sketch after this list.)

  2. A fixed spacing between the front door and the equipment is necessary for this solution to work right, since improper spacing affects the pressure, and variations in the surfaces disrupt what should be a reasonably laminar air flow.

  3. The velocity has to be high enough for air to reach the top of the cabinet, but low enough to avoid starving equipment at the bottom of air because of the pressure reductions that occur at high velocities (see blog article #3). In short, these devices are great for improving the evenness of air distribution up the height of the cabinet, but only for about 5 kW of equipment. In other words, they're really not for "extreme density"; they're meant to improve moderate-density situations.
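On the first caveat, the check is simple bookkeeping, as shown in the sketch below; every capacity figure in it is an illustrative assumption, not a recommendation.

```python
# Supply-versus-demand tally for fan-assisted cabinets (illustrative numbers only).
# Each bottom-front blower pulls a fixed CFM from the floor, and the CRACs must
# supply that draw on top of what the ordinary perforated tiles deliver.

crac_supply_cfm = 4 * 12000    # assumed: four CRAC units at 12,000 CFM each
blower_draw_cfm = 30 * 800     # assumed: thirty cabinets with 800 CFM blowers
tile_allowance_cfm = 20000     # assumed: air reserved for ordinary perforated tiles

headroom = crac_supply_cfm - blower_draw_cfm - tile_allowance_cfm
print(headroom)   # 4,000 CFM to spare -- lose one CRAC and the budget goes negative
```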

Top fans have been used for years and historically were the only thing available to cool an enclosure. Today you may see some significant performance figures stated for cabinets using them. Keep one thing in mind: The goal is to deliver the air needed to go through each device, not around it. Cooling the outside of a computer does not, in and of itself, cool the inside. Thankfully, virtually all blade servers today, and many other processors, come with IP-addressable internal temperature sensors, so you can tell what's really happening inside the case, where it counts. When cold air is being pulled all around the equipment, it's difficult to get true temperature readings with probes placed inside the cabinet because the hot air being discharged from the servers is immediately mixed with cold air pulled from the raised floor. We're not saying top-fan cabinets don't work. They certainly can. We're just suggesting that a level of awareness accompany any testing of the performance capabilities and any interpretation of "test data." If the top fans are meant to exhaust hot air from behind the servers, rather than to pull it from the floor, a baffle should be installed to prevent also pulling cold air out of the front of the cabinet.
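Where those internal sensors exist, reading them can be scripted. Here is a minimal sketch that assumes the servers expose their sensors over IPMI and that the ipmitool utility is available; the host name and credentials are placeholders.

```python
# Minimal sketch: poll a server's internal temperature sensors over the network.
# Assumes the hardware exposes its sensors via IPMI and ipmitool is installed.
# Host, user and password below are placeholders, not real values.
import subprocess

def internal_temps(host: str, user: str, password: str) -> str:
    """Return the raw temperature sensor listing reported by the server's BMC."""
    result = subprocess.run(
        ["ipmitool", "-H", host, "-U", user, "-P", password,
         "sdr", "type", "temperature"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(internal_temps("server-01.example.com", "admin", "secret"))
```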

Rear-door fans come in many types and sizes, but they all have one purpose -- to move air through the equipment more efficiently. Recall that we said cooling requires the movement of a certain volume of air at a certain temperature in order to cool anything. We're not going to change the temperature of the under-floor air. It is what it is for some very good reasons, and it will get warmer as it exits the floor tiles and moves upward in front of the cabinet. Therefore, the one way to improve cooling is to move more of it through the equipment. (Notice, we said through the equipment, not simply through the cabinet.) There are doors that simply have fans of various sizes mounted in them. There are versions that pull the air into a chamber inside the door, in some cases engineered for an air flow gradation from bottom to top so as to compensate for the temperature change between lower and upper devices. There are some with chimneys that exhaust the hot air directly into the ceiling plenum rather than into the "hot aisle" so the hot air has no chance to bypass and mix with the incoming cold air. (See blog article #2.)

Whatever version you might consider, there are two factors to keep in mind. First, make sure the fans are actually pulling the additional air through the computers, not simply around the outside of them. This not only means putting blanking panels in unused rack spaces (see blog article #1), but also means a cabinet design that blocks the space around the front rails. Second, be cautious of fans that are too powerful and simply stuck in the doors without any real engineering behind them. Recall we mentioned the tiny fans in small form factor servers. They were designed to provide the required air flow over critical components, and they can be easily overwhelmed by the large fans in the doors, particularly by fans directly behind specific devices. If this disrupts the air flow inside the server because velocities get too high, then the door fans are actually counter-productive. The only way to really know is by checking internal temperature sensors, if they exist, or watching for unusual rates of component failures or data errors in devices mounted in front of large fans. It's also possible to run air velocities so high that static is actually created by the fast-moving air, but if the room humidity is well controlled, this is not a likely problem with door-fan cooling. We'll encounter that concern next.

Liquid-cooled cabinets have come on the market in the last several years, and there are now at least five or six manufacturers making various versions. By the time you read this, there may be even more. The concept is irrefutable -- contain the high level of cooling inside the cabinet that produces the heat, rather than dissipating part of it into the room. In practice, however, this is not so simple a matter to accomplish. As mentioned for top-fan cabinets, simply putting a server inside a refrigerator doesn't necessarily cool the insides. The cold air has to get through the device to do the job. The best liquid-cooled cabinets do accomplish this, but they also need to avoid over-cooling the equipment. There are three reasons.

  1. First is the matter of condensation. If the temperature reaches the "dew point" of the air, then the otherwise desirable water vapor, which is normally in the air in the form of humidity, suddenly becomes liquid water. You do not want this happening inside your servers. And even if the internal cabinet environment is controlled to stay above the dew point, what do you think can happen when the cabinet door is opened and the warmer, more humid air in the data center suddenly enters? Instant condensation is a definite possibility if the cabinet needs to run too cold in order to perform as advertised. (A quick way to estimate the dew point appears in the sketch following this list.)

  2. Second, if the server manufacturer publishes specs in accordance with the ASHRAE format mentioned earlier, you will see both maximum and minimum recommended operating temperatures. You don't want to effectively freeze the computers. They weren't designed to be run cryogenically -- at least not yet.

  3. Third, if air velocities get too high and humidity too low, the air passing rapidly over components can actually create static buildup, as mentioned above. This is much more possible in this closed environment, where both temperature and humidity are usually lower than you find in the normal data center air. (See blog article #6 on grounding.)

Ask any manufacturer of liquid-cooled cabinets how they have addressed each of these potential concerns, and expect good explanations. Don't simply accept "It's not a problem."
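On the condensation concern, the dew point itself is easy to estimate from temperature and relative humidity. Here is a minimal sketch using the Magnus approximation; the coefficients are commonly used values, and the example readings are illustrative rather than measurements from any particular room.

```python
# Dew point estimate via the Magnus approximation. Coefficients are common
# textbook values for temperatures in degrees Celsius; the readings below
# are illustrative only.
import math

A, B = 17.27, 237.7   # Magnus coefficients

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in deg C for a given temperature and relative humidity."""
    gamma = (A * temp_c) / (B + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (B * gamma) / (A - gamma)

# Cabinet interior at 22 deg C and 45% relative humidity:
print(round(dew_point_c(22.0, 45.0), 1))   # ~9.5 C -- any surface colder than this will sweat
```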

The last major consideration with liquid-cooled cabinets is fundamental redundancy. If you're running a data center that justifies cabinets this sophisticated, you probably have a high level of redundancy in both your electrical and air conditioning systems. You certainly don't want to put some of your most powerful processors in a housing that is less reliable than the rest of your facility. Before purchasing one of these cabinets (and they are rather expensive), look carefully at not only the technical performance, but also at what happens when something quits and how the design has provided for concurrent maintainability (see blog article #5).

The one thing we have not discussed is the fact that liquid-cooled cabinets obviously require bringing water into your data center, which makes most IT managers cringe. All we can say is "get used to it." Water cools 3,561 times as efficiently as air. As servers get more powerful and use more energy in smaller housings, it's going to take more than just cool air to keep them running. The day is coming when you're going to have some form of coolant running directly to your servers, which could well negate all of the above discussion of special cabinet solutions. That will definitely require a good deal of plumbing in your data center. It's inevitable. But until that happens, and until you're ready for it, look at the other cooling solutions that are already here. Just examine them thoughtfully and suspiciously before you buy.
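As a rough check on where a ratio of that magnitude comes from, here is a minimal sketch comparing the volumetric heat capacities of water and air. The property values are textbook figures for roughly room-temperature, sea-level conditions; the exact ratio shifts with the air temperature and pressure assumed.

```python
# Ratio of volumetric heat capacities: heat carried per unit volume per degree
# by water versus air. Property values are textbook figures for roughly
# room-temperature, sea-level conditions.

water_density = 1000.0   # kg/m^3
water_cp = 4186.0        # J/(kg*K)
air_density = 1.2        # kg/m^3
air_cp = 1006.0          # J/(kg*K)

ratio = (water_density * water_cp) / (air_density * air_cp)
print(round(ratio))   # roughly 3,500 -- the same order as the figure quoted above
```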

 


Even though Robert's stint on the Expert Answer Center is over, he is always ready to answer your questions on SearchDataCenter.com. Ask him your most pressing data center design question.

