
Power and cooling woes undercut blade server benefits

Blade servers offer space- and power-saving benefits. But organizational constraints and data center design may continue to tip the scales in favor of rack servers.

This is part two in our three-part series on blade servers. Here we examine the space- and energy-saving benefits of blade servers compared with rack-mount servers and explore other facilities-based concerns.

In part one of this series, we explored whether blade servers offer cost savings over rack-mount servers, and the answer is less than straightforward. When it comes to power consumption, workload capacity, space conservation and other concerns endemic to data center management, the choice between rack-mount and blade servers is equally complicated. In this, part two of our series, we address the infrastructural concerns for data center managers facing a choice between blade and rack-mount servers.

For more on blade servers:
Has blade server technology matured far enough? (part 3)

Are blade servers a viable alternative to rack servers? (part 1)

The prospect of saving space
As far as space savings go, blades offer a clear edge over their rack-mount counterparts. According to an Aperture Research Institute survey of more than 100 data center professionals -- some of whom manage several hundred centers -- organizations are increasingly responding to space constraints by turning to higher-density blade servers. In the survey, more than 87% of respondents reported that they use blades in their data centers; in a 2007 survey of TechTarget readers in North America, nearly 53% of respondents reported that they use blades.

Blades also offer power-saving potential; they tend to consume less power than equivalent rack servers. In a recent test by Sine Nomine Associates conducted on behalf of Hewlett-Packard Co., the current generation of IBM and HP blades consumed less power on a per-server basis than a single standalone server.

The tests included various configurations of IBM Corp.'s BladeCenter H chassis with 14 HS21 servers, HP's HP BladeSystem c7000 chassis with 16 ProLiant BL460c servers, and 1U standalone servers from IBM and Dell Inc. In the best-case scenario, HP blades required 34% less power, and IBM's needed 12% less power than the most efficient 1U rack server.


Vendor champions of blades highlight these energy-saving capabilities. According to Scott Tease, worldwide product manager for IBM BladeCenter, for example, a single 10 kilowatt rack that holds 24 1U servers drawing 414 watts will accommodate 36 IBM BladeCenter servers drawing 274 watts: a difference of 34%. And HP reports that 16 BL460c blade servers consume 30% less power than the equivalent number of ProLiant DL360 servers.
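The per-server savings IBM cites can be verified with quick arithmetic. The sketch below uses only the wattages and server counts quoted above; the rack totals it prints are our own illustration, not vendor-published figures:

```python
# Per-server power figures cited by IBM's Scott Tease.
rack_1u_servers, rack_1u_watts = 24, 414   # 1U servers in a 10 kW rack
blade_servers, blade_watts = 36, 274       # BladeCenter servers in the same rack

# Per-server savings: 1 - 274/414, roughly the 34% IBM quotes.
per_server_savings = 1 - blade_watts / rack_1u_watts
print(f"Per-server savings: {per_server_savings:.0%}")

# Illustrative rack totals: more servers fit in the same 10 kW envelope.
total_1u = rack_1u_servers * rack_1u_watts      # 9,936 W for 24 1U servers
total_blades = blade_servers * blade_watts      # 9,864 W for 36 blades
print(f"1U rack total: {total_1u} W; blade rack total: {total_blades} W")
```

Note that the rack-level totals are nearly identical: the savings show up as 50% more servers in the same power envelope, not as a smaller utility bill per rack.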

The problem of power and cooling
So with clear space- and power-saving possibilities, why do many data center managers continue to have reservations about blades?

The traditional knock against blades is that their higher density requires more power per rack than a comparably sized rack of traditional servers. While individual blades are more energy efficient, the total power draw of a fully loaded blade frame is higher, which means delivering more power overall to a frame of blades than to an equivalent rack of standalone servers. For data centers that can't get additional power from their utility companies, blades are an untenable option.

Even for data centers with adequate power sources, blades still pose issues. They generate more heat per square foot than rack servers, which can compromise the ability of a data center to provide adequate cooling.

"Cooling and power draw are considerations," said Clay Ryder, president of research firm the Sageza Group Inc. "In overall power consumption, blades should come out ahead," he said. But "if you consolidate the number of servers into a smaller amount of space, that can possibly change the heating profile of the data center."

The end result: Higher-density blades contribute to data center capacity issues and power costs rather than mitigate them. In some cases, organizations using blades save floor space but fill it with additional servers or equipment, introducing additional power usage and cooling costs that can further detract from power savings.

Retrofitting for blades
If power and cooling issues don't undermine the benefits of blades, organizational capacity to house them may be the final straw.

Many data centers are not rated to handle a high-density blade server environment. In the Aperture survey, nearly 30% of respondents said that the average power density of their racks ranged from just 3 kilowatts to 6 kilowatts, while 22% said the average fell between 7 kilowatts and 12 kilowatts. When it came to high-powered racks, about 9% put the average at between 13 kilowatts and 18 kilowatts, and nearly 7% said the average power density was more than 18 kilowatts per rack. As for the maximum power density of racks, a majority of respondents -- 53% -- fell into the range of 3 kilowatts to 18 kilowatts. Only about 6% of respondents said they have a maximum power density per rack that's more than 30 kilowatts. Yet a fully loaded rack of blades could easily require anywhere from 20 kilowatts to 24 kilowatts, leaving many companies unequipped to adopt blades.
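The gap between those survey figures and a full blade load can be expressed as a simple feasibility check. The sketch below uses the article's 20 kilowatt floor for a fully loaded blade rack; the helper name is our own:

```python
# Feasibility check: can a rack at a given maximum power density host a
# fully loaded blade configuration? The 20 kW threshold is the low end of
# the article's 20-24 kW range for a fully loaded rack of blades.
BLADE_RACK_KW_MIN = 20

def supports_full_blade_rack(max_rack_kw: float) -> bool:
    """True if the rack's maximum power density covers a full blade load."""
    return max_rack_kw >= BLADE_RACK_KW_MIN

# The survey's most common maximum densities (3-18 kW per rack) fall short;
# only the roughly 6% of sites above 30 kW clear the bar comfortably.
for kw in (6, 12, 18, 24, 30):
    status = "OK" if supports_full_blade_rack(kw) else "insufficient"
    print(f"{kw:>2} kW rack: {status}")
```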

In effect, blades can exacerbate data center inadequacies, but they aren't necessarily the cause, at least according to Philip Skeete, president of Conxerge, a managed service provider and blade user. For Skeete, adopting blades requires making corresponding adjustments in data center design to accommodate the additional power and cooling requirements.

"We have had blade installations at customer sites that have overheated," he said, "but it was because the customers didn't have the proper cooling" systems in place. "Ten blades may throw out 20% more heat in the same amount of floor space than traditional servers, but it's not something that can't be solved by doing some planning up front. You might need to retrofit the [air conditioners] and put more power circuits and vents in so that you can drop more cold air in front of the blades."

Barb Goldworm, president of Focus Consulting Inc. and author of Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs, concurred. "There's a lot of easy and inexpensive things [users] can do to make their environments capable of blades." Among the measures that Goldworm recommends are the following:

  • introducing hot-aisle and cold-aisle rack arrangements;
  • clearing out the spaces beneath a raised floor;
  • installing blanking panels; and
  • strategically locating vents.

Ideally, data centers should plan any facilities changes up front, well before deploying blades. Vendors and consultants that offer data center advisory services can help clients accommodate the power and cooling requirements of blades and minimize the pain of migration.

The tipping point
When retrofitting a data center isn't viable, heating and cooling issues tip the scales in favor of staying with rack servers. Walt Crosby, chief architect at Everyday Wireless, a provider of Global Positioning System-based vehicle and student tracking systems, has chosen to stay with rack servers for several reasons, among which are facilities-related concerns.

Crosby's concern is that if Everyday Wireless were to outsource, a hosting company wouldn't be able to accommodate blades if it is already at or near capacity in terms of heating and cooling.

"The heat/power consumption per rack is so outrageous," he said. "It blows typical rack power and air-conditioning usage all to hell." While adding vents and blanking panels may solve those issues, customers of hosting companies have little say about whether those measures are carried out, and Crosby is unwilling to risk that a hosting company may not design its data center properly.

Over the past few years, blades have come a long way in terms of power consumption. With shared components and efficient internal cooling mechanisms, blades compare favorably with traditional servers. But data center managers have to go beyond simply comparing utility bills. The increased density of blades can create facilities-related challenges, in terms of both the overall power supply provided to a data center and the existing cooling profile. As with any technology, blades have pros and cons, and it's up to individual data center managers to evaluate blades' advantages and drawbacks to decide what works best in the context of their organization.


Let us know what you think about the story; email Megan Santosus, Features Writer.
