Let's face it: A lot of green data center case studies are pretty worthless. Vendors and customers pat one another on the back for buying green products and offer vague promises to save energy at some unspecified point down the road.
But the facilities department at United Parcel Service of America Inc.'s Alpharetta, Ga., site is about to save you a lot of money on your data center air-conditioning bill today. Joe Parrino, data center manager at UPS' Windward data center, also explains his organization's load-shedding process and proves that using outside air to cool a data center can work, even in the hot temperatures of the southeastern U.S.
Brown goes green in the data center
UPS' Windward data center bucks the conventional wisdom. Old data center facilities are supposed to be inefficient, and outdated mechanical systems are primarily to blame. Even worse, considering the amount of redundancy designed into the facility to prevent downtime, an Uptime Institute Tier 4-rated data center would have to be a real energy hog.
But somehow the 13-year-old, Tier 4 facility in Alpharetta scores a power usage effectiveness (PUE) as low as 1.9, a metric the Uptime Institute calls SI-EER. This ratio is the power going into the facility at the utility meter divided by the power delivered to the IT load, measured either at the power distribution unit (PDU) or at the uninterruptible power supply.
In the case of the Windward data center, PUE was measured at the output of the uninterruptible power supply; measuring the output of the PDU was too difficult. For a more detailed discussion of the differences in measuring at the power distribution unit versus at the uninterruptible power supply, listen to the podcast "Where to measure IT vs. infrastructure power use: PDU or UPS?" with Pitt Turner.
According to the Uptime Institute, the average ratio is 2.5. This means that for every 2.5 watts going "in" at the utility meter, only 1 watt is delivered out to the IT load. In this regard, United Parcel Service's Windward data center is way ahead of the curve. But how did the company do it?
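The PUE arithmetic above is simple enough to sketch directly. A minimal illustration, using round numbers consistent with the figures in the article (the specific kilowatt values are assumptions for the example, not measurements from UPS):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: power at the utility meter
    divided by power delivered to the IT load."""
    return total_facility_kw / it_load_kw

# Windward's ratio: 1.9 W in at the meter per 1 W out to IT.
print(pue(1900.0, 1000.0))  # 1.9

# The Uptime Institute's reported average: 2.5 W in per 1 W to IT.
print(pue(2500.0, 1000.0))  # 2.5
```

Note that a lower PUE is better; a hypothetical ideal facility, with zero cooling and distribution overhead, would score 1.0.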
Cutting out the air handling units
Forced-air cooling is one of the least efficient systems in data center infrastructure, and wasting cold air is the most common mistake in data center management. You can set up hot aisle/cold aisle, install blanking panels, and seal gaps in the floor, but you've probably still wasted cold air in a place you wouldn't expect: the perforated top of power distribution units.
Parrino's staff learned this by chance. The team noticed the perforated roof on a PDU as it sat in a hallway waiting for installation. They took airflow measurements on several installed units using a velometer and calculated the cubic feet per minute (CFM) lost (i.e., the velocity of the air multiplied by the area of the opening in square feet). United Parcel Service determined the units lost 2,000 CFM per PDU.
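The velocity-times-area calculation can be sketched in a few lines. The article gives only the result (about 2,000 CFM per PDU), so the velometer reading and opening area below are illustrative assumptions chosen to reproduce that figure:

```python
def cfm_loss(velocity_ft_per_min: float, opening_area_sq_ft: float) -> float:
    """Airflow through an opening in cubic feet per minute:
    air velocity (ft/min) multiplied by the opening's area (sq ft)."""
    return velocity_ft_per_min * opening_area_sq_ft

# e.g. a 500 ft/min velometer reading over a 4 sq ft perforated PDU top:
print(cfm_loss(500.0, 4.0))  # 2000.0, matching the per-PDU loss UPS measured
```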
The next step was to seal the top of the PDUs with Lexan covers. Parrino hired a contractor to install covers on all the units. The covers have a three-inch opening to ensure that the transformers still get airflow, but they block 90% of the undesirable bypass airflow. Following the installation of the Lexan covers, the average transformer temperature rose by only 1 to 2 degrees Fahrenheit.
"After we installed the covers, we looked at the under-floor static pressure and we were amazed at what we got back," Parrino said. The data center had 62 PDUs that were wasting 124,000 CFM of cold air. With the covers installed, Parrino estimated that he could shut off six computer room air handlers [CRAH] based on measured airflow of 19,000 CFM per CRAH unit. In reality, he shut off 10.
The cost of covering the PDUs was about $6,000, and United Parcel Service estimated that payback would take about 4.3 months. Instead, the project paid for itself in only a month and a half.
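The payback figures imply monthly savings the article doesn't state directly. Working backward from the $6,000 cost (the monthly-savings values below are derived estimates, not numbers from UPS):

```python
def payback_months(cost_usd: float, monthly_savings_usd: float) -> float:
    """Simple payback period: project cost divided by monthly savings."""
    return cost_usd / monthly_savings_usd

# An estimated 4.3-month payback implies roughly $1,400/month in savings:
print(round(payback_months(6_000.0, 1_400.0), 1))  # 4.3

# The actual 1.5-month payback implies roughly $4,000/month:
print(payback_months(6_000.0, 4_000.0))  # 1.5
```

In other words, shutting off 10 CRAH units instead of the projected six nearly tripled the monthly savings behind the original estimate.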
Parrino said he plans to implement variable frequency drives on some of Windward's CRAH units, and his team is experimenting with variable air volume floor grates controlled by intake temperatures of the racks. "This will slow the consumption of CRAH fan energy even further by delivering the CFM that's needed for each rack instead of delivering based on the worst-case IT load," Parrino said.
ABOUT THE AUTHOR: Matt Stansberry is SearchDataCenter.com's senior site editor. Write to him about your data center concerns at firstname.lastname@example.org.
This was first published in January 2008