LAS VEGAS -- "What you don't measure, you don't know," said Christian Belady, a technologist at Hewlett-Packard Co. (HP), during the keynote address at AFCOM's Data Center World conference on Tuesday.
Belady relayed the story of having visited a customer and recommending a hot-aisle/cold-aisle configuration. The company implemented it, called Belady three months later and told him it was reverting to the old setup because the IT employees found the hot aisle, well, too hot. When Belady protested that hot aisle/cold aisle was more efficient, the customer said that was possible, but it had no way of measuring it.
"If you can't measure, you won't improve it," Belady said. "How do you know if you're running an efficient data center operation?"
Last year Belady created a metric that compares the total power coming into a data center with the power that actually reaches the IT equipment. Belady's metric, called power usage effectiveness (PUE), is gaining traction in the industry with support from The Green Grid and the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Meanwhile, the federal Environmental Protection Agency (EPA), Lawrence Berkeley National Laboratory and SPEC are also looking into developing efficiency metrics for servers.
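The calculation behind PUE is simple: divide the total power drawn by the facility by the power delivered to the IT equipment. A minimal sketch, with hypothetical example figures (the function name and kilowatt numbers are illustrative, not from the article):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt entering the building reaches
    the IT gear; higher values mean more power lost to cooling,
    power distribution and other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 2,000 kW overall while its servers,
# storage and network gear consume 1,250 kW:
print(pue(2000.0, 1250.0))  # 1.6
```

In this sketch, a PUE of 1.6 means that for every watt powering IT equipment, another 0.6 watts go to overhead such as cooling, which is exactly the kind of number a manager could track before and after a change like plugging cable cutouts.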
Data center efficiency is high on data center managers' minds these days, especially with issues of power and cooling taking center stage. Mike Krings, an IT supervisor for the state of Montana's Department of Administration, said that Belady's speech had some good ideas, but the "proof is in the pudding." It all depends on what he can bring back.
Krings is dealing with power issues in his data center, saying that "anybody that isn't (dealing with those issues) isn't in the data center." His department outgrew 5,000 square feet of raised floor and had to move onto 1,000 square feet of nonraised floor, where he has had to use pedestals to elevate air conditioners that blow air out at ground level and disperse it throughout the room.
Among Krings' difficulties is convincing people in his department that best practices, such as hot aisle/cold aisle, are worthwhile.
"The cooling issue is getting the technicians to believe that there is a proper way of doing it instead of helter skelter," he said. "Some of them want racks all facing the same way."
Still, Krings was able to relocate some racks, move an air handler closer to servers, and work with perforated tiles so the cool air gets to where it needs to go.
There are many other ways companies can save power costs in the data center, and Belady listed some of them: Throttling the CPU down when server utilization is low, using virtualization to fight low server utilization, deploying blades as shared resources and closely coupling servers with cooling.
But all of those practices need to be tested, Belady said. Otherwise, there is no incentive for companies to adopt them because it may just sound like more marketing talk. That's where the PUE comes in.
"When I plug the cable cutouts and go hot aisle, cold aisle, does my efficiency number improve?" he asked.
Belady said that he is crafting a study looking at PUE for a number of data centers, including HP's facilities and government data centers provided by Lawrence Berkeley National Laboratory. He expects it to be out this summer.
Let us know what you think about the story; e-mail: Mark Fontecchio, News Writer.