Gartner predicts data center power and cooling crisis

High-density server equipment, notably blades, will make half of the world's data centers "functionally obsolete" as early as next year.

At the Data Center Power and Cooling Challenge seminar, held this week at the Gartner IT Infrastructure, Operations and Management Summit 2007, a poll of the roughly 100 attendees found that the greatest facility problems in data centers are insufficient cooling (37%) and insufficient power (43%).

Of those polled at the session, 93% said they will expand/upgrade, relocate or renovate their facility to accommodate power and cooling needs over the next year.

"Power and cooling is a pandemic in the world of the data center," said Michael Bell, research vice president at Gartner Inc., who headed the seminar. "By next year, about half the world's data centers will be functionally obsolete due to insufficient power and cooling capacity to meet the demands of high-density equipment."

Technologies are emerging to curb this dismal prediction, including in-server, in-rack and in-row cooling. By 2011, in-rack and in-row cooling will become the predominant cooling strategies for high-density equipment, and in-server cooling technologies, such as the one from SprayCool Inc., will be adopted in 15% of leading server products, Gartner predicts.

Not surprisingly, servers built for density, particularly blades, are exacerbating the problem: stacking them into a small footprint demands more watts per rack.

"When servers where all just pizza boxes that were moved around, it wasn't as big of an issue. Now we have blades -- or flame throwers -- and power and cooling is a problem," said Kenneth Uhlman, director of data center business development at Eaton Corp. during his session.

Gartner predicts blade installations will reach 7.2 million by 2011. From an economic standpoint, Gartner estimated that running a rack with two servers today costs $112.13 per square foot; with six servers in the same rack, the cost climbs to $420.48 per square foot; and squeezing 12 blade servers into that rack pushes it to $1,261.44 per square foot.
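Laid out side by side, those figures show that density raises not only the total cost of a rack but also the cost per server. The short sketch below simply restates Gartner's cited numbers; the per-server breakdown it prints is an illustrative derivation, not a Gartner figure.

```python
# Gartner's cited rack costs per square foot, keyed by servers per rack
# (figures from the article; the per-server breakdown is illustrative only).
rack_costs_per_sqft = {2: 112.13, 6: 420.48, 12: 1261.44}

for servers, cost in sorted(rack_costs_per_sqft.items()):
    per_server = cost / servers
    print(f"{servers:>2} servers: ${cost:>8,.2f}/sq ft total, ${per_server:>6,.2f}/sq ft per server")
```

Run as written, the per-server cost climbs from roughly $56 to $105 per square foot as the rack fills, which is the crux of the density problem.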

Power and cooling floor plan

Solutions to the power and cooling conundrum are a matter of strategy, and the most important strategic decision, Bell said, is location.

"Of all the decisions you make regarding the data center, where you put it is most critical. Putting a data center in downtown New York is just wrong. You need high availability of electricity, low power rates and a moderate climate -- places like Oregon -- so you can use the outside air to cool the data center," Bell said. "Why do you think Google located its data center right near a river and a hydroelectric plant? Other similar businesses are following because the availability and cost of power is a huge problem."

The height and size of the building make a difference, too. Office space is the worst type of building for a power-hungry data center because of its design, he said.

"I prefer warehouse type buildings with high ceilings, one floor, so you can use the geometry for cooling the space. The layout, how racks are configured, spaced, diversify the equipment to avoid hot spots, all these considerations add to cooling efficiencies," Bell said.

He also suggests provisioning certain zones in the data center with different types of cooling technologies where very hot, high-density servers can be located.

Choices, choices

And then, of course, you have to choose the best technologies for your data center ecosystem, which spans from the processor to the raised floor.

"AMD and Intel are coming forward with more and more efficient chipsets, but this is not the only answer. It is a start," Bell said.

In-chassis cooling from companies like SprayCool and Cooligy Inc. is gaining in popularity, and 15% of servers will use these innovations by the end of the decade, Gartner reported.

SprayCool's system sprays a nonconductive liquid coolant, safe for both electronics and people, directly onto the chips in the server, capturing heat at the chip level before it reaches the room. Cooligy's fluid-based, chip-level cooling technology works in a similar way.

Then there are in-rack cooling systems from Sanmina-SCI Corp., IBM, Hewlett-Packard Co. (HP) and Knurr Inc., whose system is rated up to 35 kW. In-row cooling options, such as Liebert Corp.'s XD series and American Power Conversion Corp.'s (APC) InfraStruXure, are flexible choices that become necessary once racks hit the 8 kW-to-10 kW threshold, according to Bell.

Bell said if you are using 10kW-to-25kW racks, go with in-row cooling, and anything above that should have in-rack cooling.
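As a rough translation of Bell's rule of thumb into a decision aid, the sketch below maps a rack's power draw to a suggested cooling approach. The thresholds come from his comments; the function name and the exact boundary handling are assumptions.

```python
def suggested_cooling(rack_kw: float) -> str:
    """Map rack power draw to the cooling approach Bell suggests.

    Thresholds are taken from the article; boundary handling is assumed.
    """
    if rack_kw < 8:
        return "conventional room-level forced-air cooling"
    if rack_kw < 10:
        return "transition zone (8-10 kW): start planning for in-row cooling"
    if rack_kw <= 25:
        return "in-row cooling"
    return "in-rack cooling"

# Example: a densely packed blade rack drawing 30 kW
print(suggested_cooling(30.0))  # -> in-rack cooling
```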

Power management lacking

There are a number of power and cooling management systems, but there isn't yet a truly integrated management system that tells IT managers the relationship between power demand and power supply in the data center. Several vendors are working on one because it is a desperately needed tool for handling the constant fluctuations, and it will be available within the next couple of years, Bell said.

"There are options, but nothing like we need. Sun offers one that measures its own product, but won't work on other products. Space and power capacity monitoring products, like Aperture's Vista, will give you a sense of the power being used, but it does not monitor or control individual server energy demand," Bell said. "We should embrace the technologies available today, but it won't be until next year or so until we see a fully integrated management tool."

Also, just because your vendor says the server you bought is efficient doesn't mean it is.

"Many vendors allege efficiency, but at what workload? It is difficult to measure due to variable workloads," Bell said.

To counter that, Bell recommended determining your average workload for servers (a range of 15% to 20%, for instance) and requesting that your vendors report product energy efficiencies based on that number.
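A minimal sketch of that step, assuming hourly CPU-utilization readings are available (the sample values below are invented): average the readings and use the result as the load point at which vendors should quote efficiency.

```python
# Hypothetical hourly CPU-utilization readings for a server (invented values).
cpu_utilization_samples = [0.12, 0.18, 0.22, 0.15, 0.19, 0.14]

average = sum(cpu_utilization_samples) / len(cpu_utilization_samples)
print(f"Average workload: {average:.0%}")
print("Ask vendors to quote energy efficiency at this load, not just at peak.")
```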

In the meantime, there are things you can do to better manage your power and cooling woes: use cold-aisle/hot-aisle rack configurations, avoid air leaks in the raised floor, use blanking panels, maintain vapor barriers around the data center perimeter, mix high-density with low-density equipment, spread out racks, and virtualize to reduce the number of physical servers and other equipment, Bell said.

It's also a good idea to have a trusted consulting firm do a "health check" every 18 to 24 months.

Lower your costs

If you're using a data center hosting company, a good question to ask is how it allocates energy costs, so that low-power users aren't subsidizing high-power users. Also ask whether the facility employs power management systems and, if so, whether it reports the results and can help resolve tenant power and cooling issues.

Here are some key recommendations from Gartner:

  • Look at the design, layout and ways to optimize power and cooling capacity.
  • Monitor power demanded by a rack, and plan accordingly.
  • Scrutinize the service provider hosting your data center operations, and try renegotiating power costs if you are a low-power user in a shared facility.
  • Identify high-density zones within the raised floor that could use innovative technologies, like liquid cooling.

Let us know what you think about the story; e-mail: Bridget Botelho, News Writer

Also, check out our news blog at serverspecs.blogs.techtarget.com
