
Customize data center cooling design for increased energy efficiency

Choosing to customize your data center cooling design can have a big impact on data center energy efficiency. Rather than opting for one-size-fits-all products that consume an unnecessary amount of power, look to customize cooling for specific components in your facility, says one expert.

A typical sales presentation involves a list of product features and an explanation of why each feature provides an important benefit. In an effort to stand out, manufacturers pile on features (and cost), resulting in a huge list of capabilities that no one needs, uses or understands. But think how different the situation would be if product features were selected based on specific project requirements. It's done with servers in the data center, and it should also be done with the infrastructure equipment that supports those servers. Most data center cooling products are one-size-fits-all SKU-numbered products -- if your data center is in San Jose, Calif., you will get the same product as a data center in Secaucus, N.J., leaving you to figure out how differences in capacity and operating parameters will affect your operation.

Custom equipment grants the ability to optimize internal components and unit configuration, which allows you to tailor the equipment to maximize its efficiency based on the conditions in your space. For example, most computer room air conditioner (CRAC) manufacturers will use the same fan for a unit with overhead ductwork as for a unit discharging into a raised floor. These configurations present totally different conditions at the fan, and by optimizing the fan for a specific application you can save close to 50% of the power used to move the air. The backpressure a fan has to overcome differs depending on whether the fan is ducted or discharging into an open raised floor. Rather than selecting a single fan that works reasonably well under all conditions, select a fan for the actual conditions. A different fan diameter or a different number of blades will result in an optimized operating point.
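The fan-power arithmetic behind that claim can be sketched with the standard air-horsepower formula. The airflow, static pressures and fan efficiencies below are illustrative assumptions, not figures from any particular unit; the point is how quickly pressure and operating-point efficiency compound.

```python
def fan_power_kw(cfm, static_pressure_in_wg, fan_efficiency):
    """Brake power for a fan, from the standard air-horsepower formula:
    hp = (CFM x in. w.g.) / (6356 x efficiency); 1 hp = 0.746 kW."""
    hp = (cfm * static_pressure_in_wg) / (6356.0 * fan_efficiency)
    return hp * 0.746

# Assumed numbers: a stock fan fighting 2.0 in. w.g. of ducted backpressure
# at 55% efficiency, versus a fan selected for an open raised-floor
# discharge at 1.0 in. w.g., operating near its peak at 70% efficiency.
stock = fan_power_kw(10_000, 2.0, 0.55)      # ~4.3 kW
optimized = fan_power_kw(10_000, 1.0, 0.70)  # ~1.7 kW
savings = 1 - optimized / stock              # ~0.61, i.e. more than 50%
```

Under these assumed conditions the selected fan uses roughly 60% less power, consistent with the "close to 50%" savings cited above.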


All the power fed to the servers is converted to heat that must be removed from the data center. For typical operating conditions, 125 cubic feet per minute (CFM) of cooling air is required for each kilowatt of server power. A 2 megawatt (MW) data center might have 1,500 square feet of floor area and require 250,000 CFM of airflow. Just a few years ago, this same space might have required only 25,000 to 30,000 CFM. The most common air-handler configurations for removing heat from a data center are CRAC-type units located in the data center space or ducted air handlers located outside the data center. The trend of close-coupled or in-row cooling is quickly gaining traction, especially in high-density spaces. Which data center cooling system is best for a specific data center is a function of energy costs, space availability, maintenance capability, user requirements and a host of other issues beyond the scope of this article. However, we do believe that once a system type is selected, the ability to customize that product to maximize efficiency and minimize operating costs will yield three major benefits: reduced energy consumption, greater temperature control and easier configuration.
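The airflow arithmetic above is simple enough to capture in a couple of lines; this sketch just restates the 125 CFM-per-kW rule of thumb used in the example.

```python
CFM_PER_KW = 125  # cooling airflow per kW of IT load (article's rule of thumb)

def required_airflow_cfm(it_load_kw):
    """Total cooling airflow needed to remove the heat from a given IT load."""
    return it_load_kw * CFM_PER_KW

airflow = required_airflow_cfm(2_000)  # 2 MW data center -> 250,000 CFM
```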

Reduced energy consumption
A typical CRAC unit uses 0.6 kW of energy to move 1,000 CFM of air. For the 2 MW space described above, the typical CRAC system would require 150 kW of fan energy. Clearly, any opportunity to reduce this power consumption should be pursued. Optimizing the internal components based on the specific characteristics of the data center will, in most cases, result in energy savings in the range of 50%.
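Using the 0.6 kW per 1,000 CFM figure, the stakes for the 2 MW example work out as follows. The 50% savings factor is the article's own estimate, applied here over a full 8,760-hour year.

```python
FAN_KW_PER_1000_CFM = 0.6  # typical CRAC fan power, per the figure above

def crac_fan_power_kw(airflow_cfm):
    """Total fan power for a CRAC system moving the given airflow."""
    return airflow_cfm / 1000.0 * FAN_KW_PER_1000_CFM

baseline_kw = crac_fan_power_kw(250_000)   # 150 kW for the 2 MW example
optimized_kw = baseline_kw * 0.5           # the ~50% savings estimate above
annual_kwh_saved = (baseline_kw - optimized_kw) * 8760  # ~657,000 kWh/year
```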

Temperature control
Controls are another area where current requirements are significantly different from what was done in the recent past. Five years ago, when a 1,500-square-foot space had two or three units to provide cooling and redundancy, it made perfect sense to provide independent controls for each unit. Even though adjacent units might occasionally fight each other, things worked reasonably well. In a modern data center, the air temperature at the inlet of the racks is the important variable to control. There will always be some variability, but the results will be much more reliable if the airflow and temperature are controlled on a global basis. This means a central control system looks at temperatures throughout the facility and decides how fast fans should run and what temperature the air blown into the space should be.

A custom solution allows you to configure a control system that interfaces seamlessly with the rest of your data center. This customization could be as simple as making sure the cooling equipment controllers are manufactured by the same company that controls the rest of the facility, or as elaborate as monitoring the kilowatts consumed by the server racks and controlling the equipment based on the power consumption of the IT equipment. An off-the-shelf, proprietary controller may be cost-effective, but it limits your ability to control the cooling equipment in the most energy-efficient manner.
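As an illustration of what global control means in practice, here is a minimal sketch that drives every cooling unit from the hottest rack-inlet temperature rather than letting each unit react to its own return air. The setpoint, baseline speed and gain are assumptions chosen for illustration, not recommended values.

```python
INLET_SETPOINT_C = 24.0  # target rack-inlet temperature (assumed)
GAIN = 0.08              # proportional gain: speed fraction per deg C (assumed)

def fan_speed_command(rack_inlet_temps_c, min_speed=0.4, max_speed=1.0):
    """Return a single fan-speed fraction for all units, proportional to how
    far the hottest rack inlet is above the setpoint."""
    error = max(rack_inlet_temps_c) - INLET_SETPOINT_C
    speed = 0.6 + GAIN * error  # 60% baseline speed at setpoint (assumed)
    return max(min_speed, min(max_speed, speed))
```

With inlet readings of 23.5, 24.8 and 26.1 degrees C, the hottest inlet is 2.1 degrees over setpoint, so every fan is commanded to about 77% speed; with all inlets cold, the command clamps at the 40% floor.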


Ease of configuration
A customizable product allows you to build a unit that fits the space rather than compromise on access and serviceability. A custom unit can be designed so that all service is performed through the front or a single side, allowing you to place other equipment next to the cooling units. A custom unit also allows features such as air-side or water-side economizers to be integrated into the equipment instead of added on as a partially functional afterthought. Very few data centers would not benefit from an air-side or water-side economizer, but like the fans, coils and controls discussed above, relying on a one-size-fits-all option does not maximize the number of hours during which free cooling can be used. A data center in Phoenix would benefit from a higher-capacity economizer coil, while that same coil would waste energy if it were installed on a project in New York.

Traditional computer room cooling equipment manufacturers offer limited sizes and component selections. As described above, being able to tailor the fan selection to the project conditions can result in reduced energy required to move the air. Flexibility in coil selections may allow you to operate the chiller plant at a more efficient operating point or reduce the pressure that the unit fan has to overcome.

Modern data centers are dramatically different from the computer rooms of just a few years ago, but many facilities still use mechanical cooling systems like those you would find in an obsolete computer room. Today's data center needs large volumes of neutral-temperature air, and the amount of air required has increased by an order of magnitude, so any improvement in efficiency matters far more to operating costs than it did in the recent past. And at the end of the day, your data center is different from one across town or across the country. Relying on standard products that cannot be optimized ignores significant potential for energy savings.

ABOUT THE AUTHOR: Dan Hyman, co-founder and Principal at Custom Mechanical Systems, has 25 years of experience designing custom HVAC systems for mission-critical facilities, such as clean rooms, hospitals, research labs and data centers. These facilities all share a need for high reliability and low energy usage. Custom Mechanical Systems provides products that are tailored to users' needs and budgets to allow them to build reliable and energy-efficient facilities.
