Server power and cooling experts offer engineering insight

Power and cooling gurus of the IT industry gathered last week in Orlando at the Uptime Institute's symposium on high-density computing. These excerpts are from interviews with the power and cooling specialists from Hewlett-Packard Co. (HP), IBM and American Power Conversion Corp. (APC).

Christian Belady, distinguished technologist, HP:

How does Hewlett-Packard help IT departments measure power and cooling requirements?

Christian Belady: We have calculators for detailed power, tables for every possible configuration, but what we are finding is that not everyone knows where to find them. There are also different ways to do it in the different [HP] business units, so we're trying to pull all of that data in one place. We also have services to go in and do site assessments and people from R&D [research and development] giving customers our perspective.
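The arithmetic behind the kind of power calculator Belady describes is simple to sketch, even though the vendor tools themselves live in many places. The following is a minimal illustration, not HP's calculator; the server types, wattage figures and function names are hypothetical placeholders, and a real tool would use measured draw for each specific configuration.

```python
# Minimal sketch of the arithmetic a rack power calculator performs.
# Server types and wattages are hypothetical, not HP data; a real
# calculator uses measured draw for each configuration at typical load.

SERVER_DRAW_W = {
    "1u_rack_server": 350,        # assumed draw at load, watts
    "blade_chassis_full": 4500,   # assumed fully populated chassis, watts
}

def rack_power_w(config):
    """config maps a server type to the quantity installed in the rack."""
    return sum(SERVER_DRAW_W[kind] * qty for kind, qty in config.items())

rack = {"1u_rack_server": 30}     # example: a rack of thirty 1U servers
watts = rack_power_w(rack)
btu_per_hr = watts * 3.412        # 1 W of IT load = 3.412 BTU/hr of heat
print(f"{watts / 1000:.1f} kW load, {btu_per_hr:,.0f} BTU/hr to remove")
```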

Are blades more energy efficient than rack mounted servers?

Belady: It depends on the application. Energy efficient or cost-effective? It would be incorrect to make one statement over the other. Otherwise, why have such a broad portfolio?

You have to look at energy cost over acquisition cost. In some cases, you're looking at high-end servers for consolidation. If you go in the other direction, blades and 1U servers may be better because acquisition costs are so much lower. Small server applications work better on a small server. It's completely based on the customer's needs.

Certainly, blades are going to lead the density drive. The adoption of blades is going to determine how things happen.
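One way to read Belady's "energy cost over acquisition cost" advice is as a lifetime-cost comparison. The sketch below is a rough illustration only; the prices, wattages, electricity rate and cooling overhead are assumed placeholders, not HP or industry figures.

```python
# Rough lifetime-cost comparison: acquisition cost plus energy cost.
# All prices, wattages, the electricity rate and the cooling overhead
# are hypothetical assumptions for illustration.

def lifetime_cost(acquisition_usd, avg_draw_w, years=3, usd_per_kwh=0.10,
                  cooling_overhead=0.7):
    """Acquisition plus electricity, counting cooling power as a
    fraction of IT power."""
    hours = years * 365 * 24
    it_kwh = avg_draw_w / 1000 * hours
    energy_usd = it_kwh * (1 + cooling_overhead) * usd_per_kwh
    return acquisition_usd + energy_usd

blade_per_server = lifetime_cost(acquisition_usd=2500, avg_draw_w=280)
rack_1u_server = lifetime_cost(acquisition_usd=3000, avg_draw_w=350)
print(f"blade:  ${blade_per_server:,.0f} over 3 years")
print(f"1U box: ${rack_1u_server:,.0f} over 3 years")
```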

Is raised flooring becoming obsolete thanks to high-density server loads?

Belady: I previously worked on a program called Data Cool, in a partnership with Ericsson, based on my belief that if you're going to higher density, raised floor is not the answer. Raised floor isn't dead, because it keeps humidity at the right levels. But the fastest-growing business for the APCs and Lieberts of the world is products that bring the cooling to the heat source. My belief back then is now the reality.

Is water going to change the data center and how should it happen?

Belady: I think it needs to be standardized. And I want to be careful about calling it water; let's call it liquid. I believe whatever it ends up being, it has to be done collectively through a standards organization. You go to Best Buy, you buy a TV, you don't worry about how you're going to plug it in. If you're going to have to replumb because you bought another manufacturer's server, we've failed and the costs are going to be too high.


Roger Schmidt, distinguished engineer, IBM:

Where does water cooling fit into the modern data center, and is it going to fundamentally change the way people look at server cooling?

Roger Schmidt: Water is coming back to the rack, and in fact it is already there. You've got at least three IT manufacturers [supplying that option]. As more customers start to employ that in the data center, it becomes easier to do that type of cooling. I think it's starting to take hold, and you will see more discussion on this topic at conferences. I am sure all the IT manufacturers are looking at all of the various coolants that can be used. I think processor power consumption will continue to go up, with some downward trend in the short term. If you can get a performance jump from liquid cooling, it probably will happen.

Back in 1964, when we came out with our water-cooled mainframe, there was a big education process with customers. It probably took three years to get people comfortable with it. In 1988, at the height of water cooling, 92% of mainframe shops were using water-cooled technology. You have to get prepared for it. If you build a new data center, it might be prudent to build chilled water lines. It's easy to put them in, in case it comes up down the road.

Whether it happens or not will hinge on processor performance. If we figure out some magic way so power consumption doesn't increase any more, maybe we won't need it. But the customer wants more performance if he can get it. If the industry keeps pushing on performance, in order to get that level you're going to be looking at liquid cooling. There are customers out there that will not want to hear about it, and that's fine. We'll have to supply an air package for those customers that don't want liquid.

What is the biggest issue facing data center managers attending this conference?

Schmidt: In my view, heat and power are at the top of the list, and the surveys I've seen make that clear. Based on those surveys and feedback from customers, they're struggling. The way we used to do it five years ago, we didn't have 30 kW racks; we had 5 kW racks. The IT manufacturer used to sell the rack, the customer would plug it in and cool it, and nobody had a problem with it. Today the customer buys it, the facilities staff have to implement it, and they are struggling with how to power and cool the IT equipment. The power and cooling, or at least the cooling portion, really has to be engineered when you have 20 and 30 kW racks. You can't just throw a rack in a data center and hope it works. You've got to get the consultants and IT services arms in there.

The facilities guys in data centers are under lots of pressure today. All these cooling vendors are here and it is real tough to select the proper solution for your particular data center. The customer is bombarded from every different direction. It's a little scary what's going on. Conferences like this can help to provide some direction.
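Part of why a 20 or 30 kW rack "has to be engineered" is simple airflow arithmetic. The sketch below applies the standard sensible-heat rule of thumb (roughly 1.08 BTU/hr removed per CFM per degree Fahrenheit of air temperature rise at sea level); the chosen temperature rise and the comparison to a perforated floor tile are assumptions for illustration, not figures from the interview.

```python
# Why a 30 kW rack has to be engineered: the airflow it needs.
# Sensible-heat rule of thumb: BTU/hr ~= 1.08 * CFM * deltaT_F (sea level).

def required_cfm(rack_kw, delta_t_f=20):
    """Airflow needed to hold exhaust within delta_t_f of intake temperature."""
    btu_per_hr = rack_kw * 1000 * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

for kw in (5, 20, 30):
    print(f"{kw:>2} kW rack -> ~{required_cfm(kw):,.0f} CFM")

# A 5 kW rack needs roughly 800 CFM; a 30 kW rack needs nearly 5,000 CFM,
# far more than a typical perforated raised-floor tile can deliver.
```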

Is there a unit of measure that server vendors will be able to use across platforms for energy efficiency?

Schmidt: This is the one area that the industry is just starting to investigate in the last three months or so, and different groups are trying to pull the metrics together on performance per watt, performance per space, etc. And I think every IT company is probably looking at it. I think in the end we probably will come up with something. But the key question is what is the customer looking for and what does he need? We can plot bar charts till you're blue in the face, but what does the customer really want, and what can he use in his evaluation and TCO?

Whatever the customer wants, I think the IT manufacturers will be supportive of it. I know we're all involved in this, and it almost sounds like maybe the customers should go off in a separate group, decide and come back to us. We'll sort it out, but I think it's going to be a year of debating before we see a solution acceptable to everyone. I think it will be a useful metric for customers to use in their data center roadmap. This is an interesting topic and one that the industry probably should have looked at a few years ago.
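The metrics Schmidt describes, performance per watt and performance per unit of space, reduce to simple ratios once the industry settles on a workload benchmark. The sketch below shows only that bookkeeping; the benchmark scores, wattages and rack-unit figures are hypothetical, not measurements from IBM or any other vendor.

```python
# Sketch of the performance-per-watt and performance-per-space metrics
# under discussion. All scores, wattages and heights are hypothetical.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    benchmark_score: float  # throughput on some agreed workload benchmark
    measured_watts: float   # wall power while running that workload
    rack_units: int         # physical space consumed

    def perf_per_watt(self):
        return self.benchmark_score / self.measured_watts

    def perf_per_u(self):
        return self.benchmark_score / self.rack_units

candidates = [
    Server("blade (per blade)", benchmark_score=120, measured_watts=300, rack_units=1),
    Server("2U rack server", benchmark_score=200, measured_watts=550, rack_units=2),
]
for s in candidates:
    print(f"{s.name}: {s.perf_per_watt():.2f} perf/W, {s.perf_per_u():.1f} perf/U")
```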


Kevin Dunlap, cooling product manager, APC:

Uptime participants brought up moving away from the tier system to a rightsizing system. What is your take on that?

Dunlap: There was a time and place for a standard with specific tiers. Today it's more useful to look at the business value that you're delivering rather than a tier level. You need to look at the business and what your needs are.

Attendees during a panel suggested that server vendors are improving on efficiency, but the infrastructure cooling guys haven't. Can you address that?

Dunlap: The technology hasn't changed in the sense that we're still removing heat using air. But the architecture has changed. We're moving the cooling closer to the heat load and matching capacity to it. The technology hasn't changed per se, but the architecture has changed to deal with new problems.

What are the maintenance cost differences for blades vs. racks?

Dunlap: One of the real advantages of standardizing on blades is a 14% reduction in maintenance cost -- changing out a server and swapping another in. Virtualization is pushing that as well. We're about supporting the equipment. We don't look at equipment as blades vs. other servers. We look at it as heat density, and those loads can change from one spot to another.

Let us know what you think about the story; e-mail: Matt Stansberry, Site Editor
