Heat relief for data centers using blades

Blade servers are hot—literally—and creating a living hell for some data centers. As data center managers stew over the problem created by these high-density servers, vendors are scrambling (and finding huge revenue opportunities) to develop new technologies that will make heating issues a thing of the past.

Thom Beck, chief information officer of Truliant Federal Credit Union in Winston-Salem, N.C., could have worn a parka and stockpiled Klondike bars in his company's old data center, where at least four blades and lots of wasteful air conditioning were running. But when Truliant built a new facility, Beck enlisted American Power Conversion Corp. (APC) to design a more efficient cooling system.

The new data center, running a rack of 10 Dell blades, is about the same temperature as Beck's office.

"We're no longer pumping air into a room, we're directing it," Beck said. "The cool air is only blowing on the servers that need it."

Beck pointed out that his new data center is more than double the size of the old one, but the new design has cut down on cooling costs.

Blade servers, which are selling like hotcakes, put out nearly as much heat as the griddle. The problem isn't new, but data center managers are just now beginning to feel the rise in heat, not only in the data center, but in their tempers. And while blowing loads of cold air through vents -- in essence turning the data center into one of those walk-in beer coolers at the grocery store -- may have staved off the problem temporarily, it just doesn't cut it anymore from a cost or comfort standpoint.

Packin' heat

According to a recent report from Stamford, Conn.-based research firm Gartner Inc., blade server shipments will grow at a compound annual growth rate of 29% over the next five years and will make up more than 20% of global server shipments by 2010. IDC in Framingham, Mass., predicts that every third or fourth server sold will be a blade server by 2007. That means third-party power and cooling companies like West Kingston, R.I.-based APC and Liebert Corp. in Columbus, Ohio, can expect more calls from potential customers ready to chill, but not be chilled.
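For a sense of scale, a 29% compound annual growth rate means shipments multiply by 1.29 each year; over the five-year forecast that compounds to

\[ 1.29^{5} \approx 3.6 \]

-- a back-of-the-envelope reading of Gartner's figure (the multiplier is ours, not from the report), implying blade shipments would more than triple by the end of the period.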

Liebert is seeing dramatic growth, in no small part because server vendors are more concerned with packing their blades with processing power than with cooling them, according to Steve Madara, vice president and general manager of the company's environmental business unit.

"Blades are getting hotter if anything," he said. "We're seeing lots of blade deployments and many data center managers getting caught by surprise by the heat loads."

"Blades are being deployed in environments that aren't designed for them," added Russell Senesac, director of Infrastructure systems at APC. "And there's no proof that the power density of blades is going down."

Madara said Liebert is working with all the major server vendors to cool blade technology, some of which is still two or three years away. "We're making solutions to take cooling inside the rack, not just the chip," he said. "There's no physical limitation to cooling what they've got on the drawing board, but it's a challenge to pack more cooling into the footprint every day."

All server vendors are acutely aware of their customers' cooling concerns, according to Kelly Quinn, senior research analyst with IDC, and by the end of the year, most will have pitched technologies designed to address the issues. Independent experts are not betting on which technology will work best, but admit that most have their merits.

In July, IBM introduced Cool Blue, a liquid-based cooling technology that attaches to the back of the rack, bringing water back into the data center. Water, according to some experts, is the most efficient way to cool anything. Quinn said it remains to be seen whether other vendors will follow suit, but other experts predict that a number of them will introduce liquid cooling into their products by the end of the year.

One company expected to hit the market soon with a liquid-cooled solution is Liberty Lake, Wash.-based ISR, maker of SprayCool, which uses dielectric fluid to cool electronics. The company is talking with OEMs about incorporating its technology into blade servers, but for now it offers a retrofit version of the product.

Dual core to the rescue?

Vendors might also be leaning on cooler-running, dual-core processors from Advanced Micro Devices and Intel, which become available in November.

"Really look at dual core," Quinn said. "It will help abate cooling issues." She added that dual core, particularly with 64-bit blades strapped on top, bring performance benefits to the table that present a compelling alternative for what's available now.

Dell Inc. spokesman David Lord said his company's PowerEdge 1855 blade server is already designed to give optimum cooling and pricing advantages and should run even cooler with the new Intel dual-core processors.

But dual core is no magic pill for hot data centers, according to Liebert's Madara.

"[Vendors] might say they're putting in dual-core processors to make their blades cooler, but they will always try to put more processing power into a smaller footprint," he said. "The final configuration will be hotter."

In addition to taking advantage of the new processors and working with third-party cooling companies, Dell also has a data center environment assessment service to help customers pinpoint the hot spots.

"Our folks go into the data center environment and use thermal imaging to find hot and cool spots," Lord said. "They try to find the best places to put equipment to optimize physical space and air flow."

Hot topic

Whether it's through Dell's data center feng shui, more efficient machines or more targeted cooling, heat management will continue to be a hot topic for data center managers.

Tony Iams, senior analyst with Ideas International in Port Chester, N.Y., said users are more concerned about power management than performance, especially data center managers who work in urban areas where expansion or a move is cost prohibitive.

He sees the ability to dynamically turn units on or off according to fluctuating workloads -- reassigning power even within a chip -- as the next frontier.

"We're not there yet, but researchers are working on it," Iams said. "It will allow you to optimize utility computing at the level of power consumption."

This was first published in October 2005
