Data center liquid cooling vs. forced air cooling

SearchDataCenter.com recently visited American Power Conversion (APC) Corp.'s Research and Development facility in Billerica, MA, to get a first-hand look at its approach to data center cooling. During the visit we asked APC Chief Technical Officer Neil Rasmussen about his thoughts on data center liquid cooling. This is an excerpt of that conversation.

People are talking about data center liquid cooling as a new thing, but it actually has a long history in the data center, doesn't it?

People talk about the history of air and liquid cooling in shorthand. The idea that some people don't have liquid cooling is preposterous. The liquid could be Freon, water or a glycol antifreeze mix. All cooling is liquid cooling; it's just a question of where the liquid is.

Mainframes had a specific problem where the density of heat was so great that they couldn't get it out with air. Air has limitations. It carries much less heat than liquid does for the same volume. As computers get smaller and their density goes up, at some point we won't be able to cool them with air anymore. They will have to be cooled with liquid directly. That's what happened in mainframes. They were forced to go to liquid. No one wanted to run pipes to all these devices, but they had to.
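
To put a rough number on that volume argument, here is a minimal back-of-the-envelope sketch (the figures are standard textbook room-temperature properties, not numbers from the interview) comparing how much heat a cubic meter of air and a cubic meter of water can each carry per degree of temperature rise:

```python
# Back-of-the-envelope comparison (textbook room-temperature values, not from the interview):
# heat carried per cubic meter of coolant per degree of temperature rise.

air_density = 1.2       # kg/m^3
air_cp = 1005.0         # J/(kg*K)
water_density = 998.0   # kg/m^3
water_cp = 4186.0       # J/(kg*K)

air_vol_heat = air_density * air_cp          # ~1.2 kJ/(m^3*K)
water_vol_heat = water_density * water_cp    # ~4.2 MJ/(m^3*K)

print(f"Air:   {air_vol_heat / 1e3:.1f} kJ per m^3 per K")
print(f"Water: {water_vol_heat / 1e6:.2f} MJ per m^3 per K")
print(f"Water carries roughly {water_vol_heat / air_vol_heat:.0f}x more heat per unit volume")
```

On those assumptions, water carries on the order of 3,500 times more heat than air for the same volume and temperature rise, which is why extreme densities eventually force liquid to the equipment.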

We actually reversed course from that because power densities and the power consumption of the processors fell from the mainframe days. When they fell, it allowed people to use air cooling again.

So why did data centers switch to forced air cooling?

The great thing about air cooling is that everybody has air. If a server uses air cooling, it can be set up and left alone and it will take care of itself. Where'd the air go? It dissipated. Using air makes the problem simpler.

If every server used water, it would be useless until someone ran a water pipe to it. The advantages of air cooling are huge in comparison because I can just put the server anywhere. Every few feet we have an electrical outlet; every few feet we have an air duct. Air is everywhere, but not every place has water. We don't have water pipes hanging out of our walls that we can just jack into.

But now we're running into a density problem. Why is that?

Conventional data centers, which have been built the same way for 30 years, are big rooms with raised floors that run out of capability at around 5 kW per rack over a sustained area. Ten years ago, no rack drew over 5 kW. But now I can get a fully loaded blade server rack from any vendor that draws 25 kW. No one anticipated that kind of power density. The problem we have today, and the reason people are starting to get concerned, is that the conventional architectures for air cooling are being overrun by these power densities.

What's going on in the market today is that all the corporate applications are moving to blades. There are performance advantages, ease of deployment, ease of provisioning; the list goes on. But there's an incompatibility between these blade servers and the conventionally designed data center.

Are data centers turning to liquid cooling to solve that problem?

There are two basic approaches in the market to solving this problem. One is to redesign all the servers and put water pipes on them, because water can handle much higher cooling densities. That's what the mainframes did. There's a lot of discussion about that because there's a history of using this approach.

That would be really good for a supercomputer, where you have row after row of identical blade servers in a big, fixed installation. When Lawrence Livermore [National Laboratory] builds out a computer room, they install 5,000 servers on day one, turn on the switch and run it for four years. Then they shut it off and put in the next set. That is a great application for direct water cooling because it's static for years at a time.

Will that approach work for the traditional data center?

Direct water cooling is a bad fit if you're switching things in and out, adding a piece here and there, and moving things around all the time in a business-responsive mode. There's a lot more chaos in that environment.

One of the biggest complaints I get when I talk to data center operators in businesses is that the equipment they were told was going to be in the data center is not what they ended up with. Ask the data center designer, after it's been deployed, what percentage of the original list of equipment actually went into the data center and what percentage were surprises. A typical response is that over 50% were surprises.

Every day the servers are changing. It's a much more difficult environment in which to plan a structured cooling system. Furthermore, not everything in a data center is a server. There are routers, patch panels, storage and so on. It's a dynamic hodgepodge where it would be very impractical to plan water piping. The flexibility to buy whatever equipment you want is also important. Today you can't buy a server with a water pipe connection, and even if everybody started working on it now, it would be years before the average user had a suite of products to use.

What is your vision for the future of data center cooling?

We believe that air cooling can work at much higher densities than 5 kW per rack. We believe we can go to 25 kW. At 50 kW per rack, we would all agree it's done. There are some people saying that blades are going to 50 kW per rack, but I don't believe that's going to happen. Over the past couple of years everyone has been reducing the power consumption of their chips, which is a complete departure from what we saw five years ago. My opinion is that it's going to hold rack-level power at 25 kW or less.

The floor is the problem. I can make an air conditioner any size I want, but the problem is getting the air through the tiles. If I try to move 25 kW worth of air through a floor tile, it's going to be coming through at 120 mph. It's also very inefficient to push all that air over a distance. It takes a tremendous amount of horsepower to move it around. It's not uncommon to find the fans alone taking more power than the servers in a data center.
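
To see the scale of that tile problem, here is a rough sketch with assumptions of our own (a 10 °C air temperature rise and a single 2 ft x 2 ft tile with 25% open area; these are not figures from the interview, and the resulting velocity swings widely with the assumed temperature rise and open area):

```python
# Rough sketch (assumptions ours, not from the interview): airflow a single
# perforated floor tile would need to deliver to carry away a 25 kW rack.

power_w = 25_000.0   # heat load per rack, watts
delta_t = 10.0       # assumed air temperature rise across the servers, K
rho_air = 1.2        # air density, kg/m^3
cp_air = 1005.0      # air specific heat, J/(kg*K)

# P = rho * cp * dT * V  ->  required volumetric airflow V = P / (rho * cp * dT)
flow_m3s = power_w / (rho_air * cp_air * delta_t)
flow_cfm = flow_m3s * 2118.88   # convert m^3/s to cubic feet per minute

# Assume one 2 ft x 2 ft (~0.61 m) tile with 25% open area
tile_open_area_m2 = 0.61 * 0.61 * 0.25
velocity_ms = flow_m3s / tile_open_area_m2
velocity_mph = velocity_ms * 2.23694

print(f"Required airflow: {flow_m3s:.2f} m^3/s (~{flow_cfm:.0f} CFM)")
print(f"Velocity through tile openings: ~{velocity_ms:.0f} m/s (~{velocity_mph:.0f} mph)")
```

Under these assumptions the single tile would need to pass several thousand CFM at highway-speed velocities through its perforations; tighter temperature rises or less open area push the number far higher, and either way it is well beyond what a raised floor can realistically deliver.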

Our strategy is to get the air conditioners closer to the IT equipment. We have a bunch of different ways of doing that depending on what density you want to hit.

Let us know what you think about the story; email Adam Trujillo, Assistant Editor.
