Data center efficiency: Which tactics are worth the cost?

Which data center energy-efficiency tactics are worth the effort, and which will never return the investment? This tip shows how much you can expect to save from various green IT improvements.


Data center managers are not going "green" out of benevolence toward the environment and society; it's all about keeping costs low. In this tip, I explore various data center technologies that claim to improve efficiency and identify those that are cost-effective, cutting real money from the cost of operations.

Improving data center power use: Low-hanging fruit

There are a number of simple and relatively inexpensive changes that organizations can make to reduce energy and operating costs.

  • Raise data center temperature. Early in 2009, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) broadened its recommended data center temperature and humidity ranges, which spawned a discussion about reliability at higher temperatures.
    Hardware vendors publish operating temperature and humidity ranges for their equipment, and, notably, most calculate the equipment's mean time between failures (MTBF) at the extremes of those ranges rather than at the most favorable point. Two common examples: the Dell PowerEdge R805 is rated for 50-95 degrees Fahrenheit and the Cisco Nexus 5000 for 32-104 F. Raising the data center temperature to 80 F would not be a problem for this modern hardware, and it puts less strain on the air-conditioning equipment. Traditional refrigeration requires nearly as much power to operate as the equipment it cools, so reducing its consumption lowers electrical bills over the course of a year. If the computer room air conditioning (CRAC) supply temperature can be raised above the dew point, the effect is dramatic: CRAC units running so cool that they condense water from the air (and then require humidifiers to add the moisture back) need up to 30% more cooling capacity, and correspondingly more energy.
  • Hot-aisle/cold-aisle containment. Many customers have improved data center cooling effectiveness by hanging simple plastic curtains to contain hot air and prevent it from mixing with cold air.
    Efficiency gains depend on how well your data center's airflow patterns already keep hot and cold air apart, but containment can improve air-conditioning efficiency by as much as 15%. That can translate into annual electric bill savings of nearly $12,000 (assuming 500 servers and an electrical rate of 8 cents per kilowatt-hour).
  • Air-side economizers. Data centers located in cooler climates can use outside air to cool their servers, reducing the need to run electricity-hogging refrigeration equipment. In cooler climates, air-side economizers are estimated to cut electrical bills by as much as 33%. In addition, ASHRAE Standard 90.1 requires air-side economizers in certain parts of the U.S., specifically the drier, cooler western regions and some cooler northeastern regions.
  • Consolidation with virtualization. Server virtualization has repeatedly proven to reduce data center capital and operational expenses, provided the consolidation ratio is high enough to offset the cost of the virtualization software licenses. A simple model -- a server cost of $8.5K and a virtualization license cost of $3K, both amortized over three years, plus power, maintenance, and administrative costs -- yields a nearly threefold reduction in combined capital and operating cost at a 10:1 workload consolidation ratio. That is the total cost reduction; the energy reduction from consolidating servers approaches tenfold.
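The containment savings quoted above can be reproduced with a quick back-of-the-envelope calculation. A minimal sketch: the 230 W average server draw is an assumption chosen for illustration; the 500-server count, 8-cents-per-kWh rate, and 15% efficiency gain come from the figures above.

```python
# Rough annual savings from hot-aisle/cold-aisle containment -- a sketch.
# Assumed: 230 W average draw per server (hypothetical); the other numbers
# (500 servers, $0.08/kWh, 15% cooling-efficiency gain) come from the text.
SERVERS = 500
WATTS_PER_SERVER = 230          # assumed average draw per server
RATE = 0.08                     # dollars per kWh
EFFICIENCY_GAIN = 0.15          # containment improves cooling efficiency ~15%
HOURS_PER_YEAR = 8760

it_load_kw = SERVERS * WATTS_PER_SERVER / 1000           # 115 kW of IT load
# Traditional refrigeration draws roughly as much power as the IT load it cools.
cooling_cost = it_load_kw * HOURS_PER_YEAR * RATE        # ~$80,600 per year
savings = cooling_cost * EFFICIENCY_GAIN
print(f"Annual containment savings: ${savings:,.0f}")    # ~ $12,000
```

Under these assumptions the model lands within a hundred dollars of the article's ~$12,000 figure; plug in your own server count, draw, and utility rate to estimate your site.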
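The virtualization consolidation math can be sketched the same way. The $8.5K server, $3K license, three-year amortization, and 10:1 ratio are from the text; the power, maintenance, and per-workload administration costs are assumptions chosen for illustration (note the assumption that admin effort tracks workloads, not hosts, so it does not shrink with consolidation).

```python
# Per-workload annual cost, physical vs. 10:1 virtualized -- a sketch.
# Server/license costs and the 10:1 ratio are from the text; power,
# maintenance, and admin figures below are assumed for illustration.
SERVER_COST = 8500
LICENSE_COST = 3000
YEARS = 3                       # amortization period
RATIO = 10                      # workloads consolidated per host

power = 0.3 * 8760 * 0.08       # ~$210/yr: assumed 300 W at $0.08/kWh
maintenance = 800               # assumed annual hardware maintenance
admin_per_workload = 1200       # assumed; admin effort tracks workloads, not hosts

# One physical server hosting one workload:
physical = SERVER_COST / YEARS + power + maintenance + admin_per_workload
# One virtualization host carrying RATIO workloads:
host = (SERVER_COST + LICENSE_COST) / YEARS + power + maintenance
virtualized = host / RATIO + admin_per_workload

print(f"physical ${physical:,.0f}/yr, virtualized ${virtualized:,.0f}/yr, "
      f"{physical / virtualized:.1f}x reduction")
```

With these assumed inputs the model reproduces the "nearly threefold" total cost reduction, while the per-workload energy (the `power` term divided by the ratio) drops close to tenfold.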

Questionable energy-efficiency options

Various technologies are touted as energy savers that reduce data center operating costs, but many achieve savings only after a very long payback period.

  • High-efficiency server power supplies. Fortunately, new servers now come with digitally controlled power supplies that maintain better than 90% efficiency across the power supply's load range. But replacing an older server before its end of life just to gain power savings is a bad move: the energy savings will not recoup the capital outlay. It is best to let older equipment run its lifecycle course and replace it as it drops out of warranty and serviceability.
  • UPS and power distribution upgrades. While upgrading uninterruptible power supplies (UPSes) and power distribution units (PDUs) does improve energy efficiency, the payback is measured in years. As with high-efficiency server power supplies, replacing an 80%-efficient UPS with a 97%-efficient unit before its scheduled end of life will never yield a payback. Again, it is best to upgrade only when the older unit is no longer serviceable by its manufacturer.
  • Direct current power. Direct current (DC) power distribution within data centers has recently been pushed by vendors. Be wary of the efficiency claims: most compare modern DC distribution to alternating current (AC) distribution dating back 20 years or more. Compared to a modern AC distribution system, DC is only a percentage point or two more efficient. The added cost, plus the difficulty of finding electricians skilled in DC power distribution, means the minuscule efficiency improvement will never achieve payback.
  • Air-side economizers. Yes, I list this one as questionable as well. Hot, humid regions of the world cannot benefit from air-side economizers, which there become an added expense that never achieves any payback. ASHRAE Standard 90.1 reflects this point and does not require air-side economizers in hot, humid locations.
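The UPS point above is easy to quantify. A minimal sketch of the payback calculation: the 80% and 97% efficiencies come from the text, while the 100 kW load, $0.08/kWh rate, and $120K installed price of the replacement unit are assumptions for illustration.

```python
# Payback period for replacing an 80%-efficient UPS with a 97% unit -- a sketch.
# Efficiencies are from the text; load, rate, and price are assumed.
LOAD_KW = 100                 # assumed IT load carried by the UPS
RATE = 0.08                   # dollars per kWh
HOURS = 8760
UPGRADE_COST = 120_000        # assumed installed price of the new UPS

def annual_loss_cost(efficiency):
    """Yearly cost of the power the UPS wastes while delivering LOAD_KW."""
    wasted_kw = LOAD_KW / efficiency - LOAD_KW
    return wasted_kw * HOURS * RATE

savings = annual_loss_cost(0.80) - annual_loss_cost(0.97)
payback_years = UPGRADE_COST / savings
print(f"Saves ${savings:,.0f}/yr; simple payback in {payback_years:.1f} years")
```

Even with generous assumptions, the simple payback runs the better part of a decade, which is why retiring a working UPS early rarely pencils out; run the same arithmetic with your own load and quote before committing.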

Effects of energy-efficiency regulations

Energy-efficient solutions that result in operational cost savings are easy for data center managers to justify. The Dec. 2009 climate change summit in Copenhagen, Denmark, is an indication that world governments are paying attention to energy consumption.

Regulations designed to curb energy usage growth will no doubt be forthcoming around the world; the carbon tax debate, for example, will most likely get additional attention in the U.S. Congress in 2010. Such regulations will be designed to encourage data center energy reduction, and data center managers should have contingency plans in place to improve efficiency should they be enacted.

Bottom line

Parsing the list of low-hanging fruit, it is not hard to see that server virtualization for consolidation has the greatest potential to reduce both energy consumption and costs in the data center. Coupled with 2009 improvements in x86-based servers, such as hardware-assisted memory virtualization, many applications that organizations had deemed unfit for virtualization can now be virtualized. Because servers released since early 2009 are designed specifically for virtualization, data centers should reassess the suitability of nonvirtualized servers every six months.

About the author:
Richard Jones is vice president and service director for Data Center Strategies at Midvale, Utah-based Burton Group. He can be reached at rjones@burtongroup.com.

What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at mstansberry@techtarget.com.

This was first published in January 2010
