Just because everyone else is cooling the whole data center with massive CRACs doesn't mean you have to. In-rack and in-row cooling systems are highly efficient ways to minimize energy usage while maximizing chilling capabilities.
Computer room air conditioning (CRAC) units push chilled air into a data center and around the equipment. In most cases, cooling these vast volumes of air is very inefficient. Hot and cold aisle containment decreases that volume, but still results in a lot of excess chilling -- with the concomitant costs of powering the CRACs.
There are alternatives to a complete cooling overhaul; modern data centers adopt in-rack and in-row coolers (IRCs), also known as close-coupled cooling systems, because they're tailor-made for high densities of hot-running IT equipment and tight energy budgets.
In-rack and in-row cooling is inherently more efficient than standard CRAC systems -- the IRCs couple directly to the IT equipment rather than sending cooled air into empty space. Smaller fans suffice because the volume of chilled air is lower; energy costs fall; it is easier to target air onto high-density, hot-running components for preferential cooling; and business continuity improves, because the failure of any single cooling unit affects only that rack or row rather than the whole data center. And because most of these systems are modular, it is easy and cost-effective to build in degrees of resilience, leading to higher availability across the whole data center.
In-row and in-rack choices
At one end of the close-coupled cooling spectrum sit easily retrofitted row and rack coolers; at the other, highly efficient but more involved rack-integrated and direct liquid cooling options.
Rear-door cooling systems consist essentially of a cold plate that replaces the rear door of the rack. Fans push ambient air through the front of the rack and across the hot equipment; the rear-door cooler extracts the heat from that air, returning it to the data center at ambient temperature. The heat absorbed by the liquid (generally water) in the cold plate can be recovered to warm building air or water. Rear-door cooling systems are widely available from vendors such as IBM, Airedale International Air Conditioning, Eaton-Williams and others.
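The recoverable heat in that water loop follows directly from the sensible-heat relation Q = ṁ · cp · ΔT. A minimal sketch, using illustrative flow and temperature figures rather than any vendor's specifications:

```python
# Sketch: heat recoverable from a rear-door cooler's water loop.
# The flow rate and temperature rise below are illustrative assumptions,
# not figures from any specific product.

CP_WATER = 4186  # specific heat of water, J/(kg*K)

def recoverable_heat_kw(flow_kg_per_s: float, delta_t_k: float) -> float:
    """Heat absorbed by the water: Q = m_dot * cp * dT, returned in kW."""
    return flow_kg_per_s * CP_WATER * delta_t_k / 1000

# e.g. 0.5 kg/s through the cold plate, water warming by 10 K:
print(round(recoverable_heat_kw(0.5, 10.0), 1))  # → 20.9 (kW available for reuse)
```

Roughly 21 kW of low-grade heat per such door is what makes recovery for building air or water worthwhile.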
Dedicated racks, another low-effort retrofit, offer cooling isolation. The rack operates just like a standard data center rack, but it is sealed on all sides as a self-contained system. Cool air is forced up through the rack from the bottom, running over the equipment before exiting through the top to a hot plenum, where the heat is vented or recovered as necessary.
Self-contained cooling racks supplied by vendors such as Chatsworth, Cruxial by Rackmount Solutions and Emerson Network Power use chilled air pushed from an under-floor plenum or have built-in chilling units at the base of the rack. As such systems are completely self-contained with inlet air filtration, they are also suitable for single-rack server rooms in "dirty" environments, such as warehouses.
Two main approaches address row-based cooling needs with some facility retrofits: top-of-row systems and in-row systems.
With a top-of-row system, small CRAC systems are mounted on top of the row of server racks, blowing cool air down. In some cases, the sinking cold air keeps the whole rack at an acceptable temperature. However, at higher densities, the air is hot by the time it reaches the lower servers in the rack. In these instances, use systems with fans, engineered ducting and baffles to direct cooled air.
A heat exchanger or vent outside the data center removes hot exhaust air from row-based setups. As this is not a contained system, air coming into the data center will still need filtration and humidity control.
In-row cooling systems work within a row of standard server racks. The units are standard rack height, making them easy to match with the row and couple tightly to the IT equipment to ensure efficient cooling. Systems from APC by Schneider Electric, Liebert by Emerson Network Power, Rittal and others are engineered to take up the smallest footprint and offer high-density cooling of up to 70 kW. Ducting and baffles ensure that the cooling air gets where it needs to go.
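The airflow a unit must move to carry that kind of load is why the engineered ducting matters. A rough sketch using the standard sensible-heat approximation for air, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F); the 70 kW load matches the density quoted above, while the 20 °F air temperature rise is an assumed design point:

```python
# Sketch: airflow an in-row cooler must deliver for a given heat load,
# via the standard air-side sensible-heat approximation:
#   Q (BTU/hr) ~= 1.08 * CFM * dT (deg F)
# The 20 F rise across the equipment is an assumption for illustration.

BTU_PER_HR_PER_KW = 3412  # 1 kW = 3,412 BTU/hr

def required_cfm(load_kw: float, delta_t_f: float) -> float:
    """Airflow in CFM needed to remove load_kw at a delta_t_f air temperature rise."""
    return load_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

print(round(required_cfm(70, 20)))  # → 11057 (CFM for a 70 kW unit)
```

Moving on the order of 11,000 CFM through one rack-width unit is only practical when baffles and ducting keep every cubic foot of that air on the equipment rather than leaking into the room.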
In-rack cooling offers the highest efficiency among air-based systems. These systems require more thought and design than a simple retrofit rear-door cooler, however. Chillers are placed directly into the rack, taking in ambient air and pushing it to the outside via some type of heat exchanger. Vendors of in-rack cooling systems include Emerson Network Power, 42U and APC by Schneider Electric.
Only direct liquid cooling, via on-chip plate-based systems (such as IBM Research's Aquasar supercomputer, or Asetek products) or immersion cooling (from vendors such as Iceotope, LiquidCool Solutions or Green Revolution Cooling), offers higher thermal efficiencies than in-rack air-based cooling.
For those looking to increase the densities of their IT equipment to today's highest levels, close-coupled cooling may be the only route outside of immersion cooling, which is a disruptive design and maintenance change. Even for those who are not yet in the high-density IT camp, IRCs offer a means of reducing energy costs while improving the power usage effectiveness score for the data center.
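The PUE effect is simple arithmetic: power usage effectiveness is total facility power divided by IT equipment power, so every kilowatt shaved off cooling moves the score toward the ideal of 1.0. A sketch with assumed sample loads (not measurements from any real facility):

```python
# Sketch: how cutting cooling power improves PUE.
# PUE = total facility power / IT equipment power.
# All kW figures below are assumed sample loads for illustration.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power usage effectiveness for the given load breakdown."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# 500 kW of IT load, 50 kW of lighting/losses, and two cooling scenarios:
print(round(pue(500, 300, 50), 2))  # → 1.7 (room-level CRACs at 300 kW)
print(round(pue(500, 150, 50), 2))  # → 1.4 (close-coupled IRCs at 150 kW)
```

Halving the cooling draw in this example takes the score from 1.7 to 1.4 without touching the IT load at all.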