Google leads by example in data center energy efficiency

Google blazed the trail in ultra-low data center PUE, and other data centers can replicate its results with planning.

This news article is part of SearchDataCenter.com's coverage of the Uptime Institute Symposium. Read more related content and news from this event: Uptime Institute Symposium 2011 coverage.

SANTA CLARA, CALIF. -- Google data centers boast an ultra-low PUE. And yours can too if you follow its guidelines, said a Google executive.

Chris Malone, Google thermal technologies architect, detailed the best practices the Internet search giant used to achieve an average power usage effectiveness (PUE) of 1.16 for 10 of its data centers. By following these principles and adopting the most efficient commercially available products, the typical data center operator can set an “easy target” of a PUE in the 1.5 to 1.6 range, and an aggressive target of 1.2, Malone said at the Uptime Institute Symposium here.

Developed by the Green Grid, the PUE metric is the ratio of total data center facility power to IT equipment power. The goal is to get as close to 1.0 as possible. In a recent Uptime Institute survey, data center operators and owners reported a median PUE of 1.8, meaning that for every 1 watt consumed by IT equipment, another 0.8 watts are spent on power conversion, cooling and the like.
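
As a quick check on that arithmetic, here is a minimal sketch of the PUE calculation; the sample wattages are illustrative, not figures from the article:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# The survey's median PUE of 1.8: for every 1 kW of IT load, another
# 0.8 kW goes to cooling, power conversion and other overhead.
print(pue(total_facility_kw=1800.0, it_equipment_kw=1000.0))  # 1.8
```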

The usual suspects

The easiest way to drop data center PUE is to focus on cooling, Malone said.

“Fix cooling first,” Malone said, “because that is the area with the most opportunity for improvement.” In a typical data center with a PUE of 2.0, IT equipment consumes 50% of power and cooling is close behind with 35% of the total load. In contrast, at a Google data center of 1.16 PUE, IT consumes 86% of the power and cooling just 9%.
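
Those percentages are two views of the same ratio: because PUE is total power over IT power, IT's share of the total load is simply 1/PUE. A small sketch of that relationship using Malone's figures:

```python
# IT's share of total facility power is the reciprocal of PUE,
# so Malone's percentages follow directly from the two PUE values.
for pue_value in (2.0, 1.16):
    it_share = 1.0 / pue_value
    print(f"PUE {pue_value}: IT consumes {it_share:.0%} of total power")
# PUE 2.0: IT consumes 50% of total power
# PUE 1.16: IT consumes 86% of total power
```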

Data center pros can tackle cooling inefficiencies by managing airflow (separating hot and cold streams), raising data center operating temperatures, and using economizers and outside air, Malone said.

Smart power distribution and backup strategies can also boost energy efficiency. On the power distribution front, Malone advocated minimizing conversion steps.

“Converting once is ideal,” he said. Data centers should use best-in-class UPS products. “If you pick best-of-breed commercial products and follow best practices, you should be able to reduce your [PUE] overhead to 0.24,” he said, which works out to a PUE of 1.24.
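
His point about conversion steps follows from how losses compound: every conversion stage multiplies in another efficiency factor below 1.0. A hedged sketch of the effect, where the stage efficiencies are illustrative assumptions, not figures from the talk:

```python
from math import prod

def distribution_overhead(stage_efficiencies: list[float]) -> float:
    """Facility watts lost in power distribution per watt delivered to IT."""
    return 1.0 / prod(stage_efficiencies) - 1.0

# Several conversion stages (UPS, transformer, PDU) compound their losses...
print(distribution_overhead([0.94, 0.97, 0.96]))  # ~0.14 W overhead per IT watt
# ...while a single high-efficiency conversion wastes far less.
print(distribution_overhead([0.97]))              # ~0.03 W overhead per IT watt
```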

Even small changes can make a big impact on PUE, Malone said. Google invested about $20,000 in a 250 kW network equipment room with a PUE in the 2.2 to 2.5 range and dropped it to 1.6, he said. Techniques included improving the perforated-tile layout, turning down excess cooling by raising the temperature setpoint to 80.6°F, raising the relative humidity and reducing hot-aisle/cold-aisle air mixing. Google recouped its investment in less than a year, without implementing cold-aisle containment or free cooling, and without compromising redundancy and uptime, he said.
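
The payback arithmetic is easy to sketch. The article gives only the investment, the room's IT load and the before/after PUE, so the electricity rate below is a hypothetical assumption:

```python
# Rough payback sketch for the retrofit Malone described.
it_load_kw = 250.0            # network equipment room IT load
pue_before, pue_after = 2.2, 1.6
rate_usd_per_kwh = 0.08       # hypothetical utility rate, not from the article
investment_usd = 20_000.0

saved_kw = it_load_kw * (pue_before - pue_after)         # 150 kW of overhead cut
annual_savings = saved_kw * 8760 * rate_usd_per_kwh      # ~$105,000 per year
payback_months = investment_usd / annual_savings * 12
print(f"~${annual_savings:,.0f}/year saved; payback in {payback_months:.0f} months")
```

At any plausible utility rate, the savings from cutting 0.6 points of PUE on a load that size recoup a $20,000 retrofit well inside a year, consistent with what Malone reported.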

Doable moves

Data center operators attending the event said Google's advice was solid.

“For sure, we can achieve these sorts of numbers,” said Juan Murguia, data center manager at CEMEX, a cement provider in Monterrey, Mexico. Simple solutions like containing hot air and putting chimneys on computer room air conditioners (CRACs) “really work,” he said, and “it’s not a lot of money to do those innovations.”

Steve Press, executive director for data center facilities services at Kaiser Permanente, concurred. Tackling cooling inefficiencies in a legacy data center helped lower PUE from 1.67 to 1.45, and thanks to companies like Google, lowering one’s PUE is becoming increasingly easy, he said. “Folks like Google have set the bar and made it possible for us to go out and purchase [energy-efficient] technology,” he said.

But data center operators have a lot of work ahead of them to get to these levels. While the median PUE may be 1.8, only 18% of Uptime Institute survey takers reported a PUE at or under Google’s “easy target” of 1.6, and 21% reported a PUE of 2.0 or greater.

Let us know what you think about the story; email Alex Barrett, News Director, at abarrett@techtarget.com, or follow @aebarrett on Twitter.
