Data Center Decisions attendees point to power and cooling woes

Attendees at this week's Data Center Decisions conference in Chicago said power and cooling are the primary limiting factors for data center performance.

CHICAGO – Half of the attendees at the keynote address for the Data Center Decisions conference yesterday said power and cooling were the primary limiting factors in the performance of their data centers.

Glen Mirocha, facility manager of critical environments at Johnson Controls, a data center hosting company in Chicago, said its 30,000-square-foot data center is dealing with undersized uninterruptible power supply (UPS) devices and needs to upgrade its cooling system from compressed air to chilled water.

The current computer room air conditioner (CRAC) units aren't cutting it, and bringing in chilled water -- to use either in forced-air units or as liquid cooling -- is what he sees as the future of his data center.

"As we put more equipment on the floor, we need to increase our cooling capacity," he said. "We have customers coming in bringing some hot stuff."

Mirocha said vendors have been inflexible about retrofitting his current cabinets to cool the hardware inside. Most of them, he said, want customers to buy all-new cabinets, which cost thousands of dollars more apiece. When you're talking hundreds of Unix and Wintel servers, replacing cabinets turns into a serious cost inhibitor.

"I need someone to retrofit cold water into it," he said. "It can't be that hard. If these guys would get smart and retrofit, they'd make a ton of money. You need to get your money out of these cabinets. Some of them are expensive."

Ronald Witt feels his pain. The OS administrator has been overseeing more data center operations lately for Mortgage Guaranty Insurance Corporation, based in Milwaukee, Wis. He's dealing with a former mailroom that was converted to a data center, with a raised floor and air handling, before he started working at the company 13 years ago. Now the room houses a mainframe, 100 Sun V240 Solaris-based systems, and 170 IBM and Hewlett-Packard Wintel servers.

Mailrooms weren't built to handle that sort of power load.

To handle the influx, the company is "slowly migrating to hot aisle, cold aisle," Witt said. They're also looking to virtualize their most recent hardware to get more use out of it (a 10% to 15% utilization rate isn't satisfactory), which would allow them to dump old servers and consolidate the rest.

"I've got a lot of processing power sitting idle," he said. "We need to be able to better leverage some of our resources."

Experts agree that power and cooling will continue to dominate the data center landscape. IDC estimated that data centers spent almost $6 billion on power last year, and that for every dollar spent on hardware, 25 cents goes toward powering and cooling that equipment.
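For a sense of scale, here is a back-of-the-envelope sketch of one server's annual power-and-cooling bill, using assumed wattage, cooling overhead, and electricity price; it illustrates the kind of ratio IDC describes, not IDC's methodology:

```python
# Rough annual power-and-cooling cost for one server. Every figure here
# (wattage, cooling overhead, electricity price) is an assumption.
def annual_power_cost(watts: float, cooling_overhead: float,
                      dollars_per_kwh: float) -> float:
    hours_per_year = 24 * 365
    kwh = watts / 1000 * hours_per_year * (1 + cooling_overhead)
    return kwh * dollars_per_kwh

# A 400 W server running around the clock, with half its draw again
# spent on cooling, at $0.08 per kWh:
print(f"${annual_power_cost(400, 0.5, 0.08):,.0f} per year")  # -> $420 per year
```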

Michelle Bailey, research VP for data center trends and enterprise platforms at IDC, gave the keynote at the conference yesterday and said many data centers are wasting energy powering servers that aren't holding up their end of the bargain. IDC estimated in a study earlier this year that there is $140 billion in unused server assets. But she said the awareness is there, and that in itself is a start.

"For a really long time, I think the data center as a unit was being ignored," she said.

Help managing data center energy bills is also on the horizon. Major manufacturers, in conjunction with the federal Environmental Protection Agency, are drawing up a protocol to measure the energy efficiency of 1U and 2U rack servers. The protocol, expected out early next month, would measure the servers' power draw at set workloads and use the resulting performance-per-watt quotient as the standard.
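In broad strokes, such a quotient could be computed as follows; the load levels, throughput figures, and equal weighting are invented for illustration, since the protocol's actual workloads haven't been published:

```python
# Sketch of a performance-per-watt quotient of the sort the EPA-backed
# protocol could use: measure throughput and power draw at several load
# levels, then divide aggregate work by aggregate power. All readings
# below are hypothetical.
measurements = [
    # (operations per second, watts drawn) at increasing load levels
    (0,       180),  # idle
    (50_000,  240),  # half load
    (100_000, 310),  # full load
]

total_ops = sum(ops for ops, _ in measurements)
total_watts = sum(watts for _, watts in measurements)
print(f"{total_ops / total_watts:.1f} ops/sec per watt")  # -> 205.5
```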

Jonathan Koomey, a professor at Stanford University and a staff scientist at Lawrence Berkeley National Laboratory, said the important word of the day was "measurements." That is, if you can't quantify the amount of power your hardware is sucking up, it's difficult to know what to do about it.

Koomey suggested making sure your lighting is up to date and that the airflow in your data center isn't mixing hot and cold air. Servers are also a big part of that equation.

"The power of the server drives the investments in the data center," he said.

Let us know what you think about the story; e-mail: Mark Fontecchio, News Writer
