
Liquid-cooled refrigerants solve density hot spots

Matt Stansberry

When it comes to cooling data centers, even the best infrastructure plans, such as hot-aisle/cold-aisle designs, can't guarantee your servers will stay cool. That's why more companies and vendors are revisiting liquid cooling technology.

Decades ago, companies like Cray and IBM were cooling servers with water, but eventually moved away from the technology. Now vendors are jumping back in the game to meet the cooling requirements of denser, hotter machinery that can't be cooled by air alone.

Columbus, Ohio-based Liebert Corp. is one of the vendors that have taken another look at liquid cooling. The Liebert XD is a supplemental cooling system that sits on top of racks in high-heat-density areas of a data center. It pumps a liquid refrigerant to heat exchangers above the racks, where the refrigerant absorbs heat and evaporates into a gas; the gas then returns to the pumping station, where it is re-condensed into a liquid.
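As a rough illustration of the phase-change principle a pumped-refrigerant system like the XD relies on, here is a minimal back-of-envelope sketch of the refrigerant flow needed to absorb a given rack load. The 20 kW load and the latent-heat figure are assumptions for illustration, not Liebert specifications.

<pre><code>
# Back-of-envelope sizing for a pumped-refrigerant cooling loop (illustrative only).
# Both figures below are assumptions, not vendor data: a latent heat of
# vaporization of roughly 190 kJ/kg (in the ballpark for R134a at typical
# evaporator temperatures) and a 20 kW rack heat load.

LATENT_HEAT_J_PER_KG = 190_000   # assumed latent heat, J/kg
RACK_LOAD_W = 20_000             # assumed rack heat load, W

# Heat absorbed = mass flow * latent heat, so solve for mass flow:
mass_flow_kg_per_s = RACK_LOAD_W / LATENT_HEAT_J_PER_KG
print(f"~{mass_flow_kg_per_s:.3f} kg/s of refrigerant must evaporate "
      f"to absorb {RACK_LOAD_W / 1000:.0f} kW")   # ~0.105 kg/s
</code></pre>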

Other vendors have introduced liquid cooling technologies as well, such as IBM's Cool Blue. Known officially as the eServer Rear Door Heat Exchanger, Cool Blue is a door that hinges onto the back of a rack and is fed by a chilled-water hose routed up from under the floor. Sealed tubes filled with chilled water remove up to 55% of the heat generated in a fully populated rack, carrying it away in the warmed water rather than releasing it into the data center.
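To put that 55% figure in context, here is a quick sketch of how the load splits between the rear door and the room's air cooling. The 20 kW rack is a hypothetical load, not an IBM specification.

<pre><code>
# Split of rack heat between a rear-door heat exchanger and room air cooling,
# assuming a hypothetical 20 kW rack and the article's "up to 55%" figure.

rack_load_kw = 20.0      # assumed rack heat load, kW
door_fraction = 0.55     # up to 55% removed by the rear-door exchanger

removed_by_door_kw = rack_load_kw * door_fraction
left_for_room_kw = rack_load_kw - removed_by_door_kw
print(f"Door removes ~{removed_by_door_kw:.1f} kW; "
      f"room cooling still handles ~{left_for_room_kw:.1f} kW")
</code></pre>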

Another liquid-based cooling vendor is SprayCool from Liberty Lake, Wash.-based ISR Inc., which uses the evaporation of a non-conductive liquid to cool components.

Steve Madara, vice president and general manager of environmental business at Liebert, said it's important to provide cooling close to the heat source, and overhead cooling above the racks is a good way to do it.


"When you can't get enough air through the floor, you tend to get recirculation around the tops of the racks," Madara said. "With overhead cooling, you're not taking up any more floor space [with AC units]. Plus, you're saving on energy costs by running the fans in your CRAC units at a lower rate. CRAC units have to pull air through the plenum, cooling coils and filters, and it takes a lot of power."

But you can't get rid of traditional cooling altogether, Madara warns; you still need it for humidity control and filtration. Without filtration, dust and other particles get inside equipment, where they act as insulation and restrict airflow. Humidity needs to be maintained at specific levels as well: if it gets too low, you get static electricity discharge; if it's too high, you get condensation.

But he said running traditional AC at a lower level, in conjunction with the Liebert XD system, is actually more energy efficient than trying to cool high-density servers with conventional methods alone.
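One way to see why slowing CRAC fans saves energy is the fan affinity laws, under which fan power scales roughly with the cube of fan speed. The sketch below applies that rule to an assumed speed reduction; the baseline power and speed figures are illustrative, not Liebert data.

<pre><code>
# Fan affinity law: fan power scales roughly with the cube of fan speed.
# The 10 kW baseline and 80% speed figure are assumptions for illustration.

baseline_fan_power_kw = 10.0   # assumed total CRAC fan power at full speed
speed_fraction = 0.8           # assumed reduced fan speed (80% of full)

reduced_power_kw = baseline_fan_power_kw * speed_fraction ** 3
savings_pct = (1 - speed_fraction ** 3) * 100
print(f"At {speed_fraction:.0%} speed, fan power drops to "
      f"~{reduced_power_kw:.1f} kW (~{savings_pct:.0f}% savings)")
</code></pre>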

Madara said you'll need to pipe the room for refrigerant initially, but with a new quick-connect feature, you won't need a plumber every time you rearrange your server configuration.

There are three different types of liquid-based cooling: water, refrigerant and dielectric fluid. According to Madara, people have experience dealing with water from the water-cooled mainframe days, but there is the possibility of problems from leaks, and it's typically run under the floor, not overhead where many of the overheating problems occur.

Dielectric fluids look like antifreeze and are nonconductive -- if you spilled one on hardware, it wouldn't short the circuits. But Madara said they're expensive and hard to pump.

Liebert decided to go with a refrigerant, R134a, which Madara said costs about the same as water when you factor in system efficiency. The chemical itself costs more, but because it's easier to pump, it makes up the difference in energy savings.

Madara said the Liebert XD technology is being used in well-designed hot-aisle/cold-aisle data centers that have problems in high-density areas. "In a typical situation, you wouldn't need these running over the entire facility," Madara said.

Kevin Shinpaugh, director of cluster computing at Virginia Tech's supercomputing lab in Blacksburg, Va., is an IT pro who knows a thing or two about hot-aisle/cold-aisle designs not cutting it.

Shinpaugh runs the Terascale Computing Facility, which uses thousands of Apple computers to supply Virginia Tech researchers with computing resources for molecular modeling and other high-performance computing work.

Shinpaugh said the facility has one hot aisle and two cold aisles, each 40 feet long. And when the system is running at full CPU load, the hot aisle tops 95 degrees Fahrenheit.

The problem with supercomputing is that the equipment has to be packed close together. Shinpaugh said that with high-speed, low-latency InfiniBand connections, he was limited to 12-meter cables.

"If we could have expanded, we could have distributed the heat better," Shinpaugh said. "Every foot of cable is one nanosecond of latency. All these machines are working on the same problem at the same time and passing information back and forth. If a machine is waiting on that message, it's not doing anything else."

With those density requirements, Shinpaugh was faced with extraordinary cooling demands.

"The data center was built in the 1980s with an 18-inch raised floor. We were told that they would have to raise the floor to 40 inches to accommodate the cooling needed," Shinpaugh said. "We had old chilled water piping under the floor from our mainframes, but we couldn't modify it for our use."

That's when Liebert offered its brand new liquid cooling technology.

"Liebert's XD was our only option, and it had just come out. InfiniBand was new. No one was using Apple computers for supercomputing. We broke all of the rules."

That's not to say Shinpaugh didn't face some problems. He said he'd had a leak on a soldered connection, but the service people noticed it right away. He also had his chillers shut down after a firmware upgrade went wrong. But he said it never takes long to realize there is a problem; the room heats up quickly and the computers go to sleep.

But according to Shinpaugh, that's part of the game for early adopters, and the success of the Virginia Tech supercomputing program has been worth it.

Let us know what you think about the story; e-mail: Matt Stansberry, News Editor

