
When best practices aren't: CFD analysis forces data center cooling redesign

Mark Fontecchio
Data center best practices are supposed to be exactly that: best practices. But for Lab 7D, a 7,000-square-foot data center that networking giant Cisco Systems Inc. runs in San Jose, Calif., for testing and quality assurance, best practices were anything but.

For more on data center best practices:
The green data center 2.0: Beyond best practices

EPA sends final report on data center energy efficiency to Congress

Hot-aisle/cold-aisle containment and plenum strategies go big-time

Lab 7D is a busy place. Engineers perpetually move equipment in and out of the room to test features of Cisco's MDS storage area network (SAN) switches. Some 100 engineers work among the approximately 500 IT equipment racks, and each is responsible for a particular feature of the switch, such as the ability to write to two hard disks simultaneously.

The amount of equipment turnover and the number of bodies in the room combine to make Lab 7D an atypical data center. But like many data centers, it is also running out of power. According to Chris Noland, who oversees the facility, the lab was the No. 2 consumer of electricity on the San Jose campus, generating $150,000 a month in power costs, or $1.8 million a year.

"When we found out how much we were using, we told the general manager of the group and he said, shut off power wherever you can," Noland said. "So it was more of a monetary thing."

As a first step, they shut off redundant power supplies, which were deemed unnecessary in a testing environment. For the same reason, the data center has no uninterruptible power supplies (UPSes). Those steps saved the data center 10% in energy costs, but Noland still sought additional savings.

Exploring hot-aisle/cold-aisle containment
Cisco's data center was already set up in a hot-aisle/cold-aisle configuration, complete with perforated tiles in the cold aisle and, in Cisco's case, ceiling vents in the hot aisle. Looking to improve on this setup, Noland talked to Pacific Gas & Electric, the main utility company in San Jose, and the Lawrence Berkeley National Laboratory about cold-aisle containment.

Hot- and cold-aisle containment has gathered steam as a way to isolate the hot- and cold-air streams in a data center, which in theory makes cooling the IT equipment more efficient. But there is some debate about whether hot-aisle/cold-aisle containment is a best practice.

In Lab 7D, there are seven 30-ton computer room air conditioners (CRACs) supplying cold air to the equipment. Noland walked around the lab and noticed that some of the CRACs operated at 100%, while others operated at just half that. He figured that if the room were designed correctly, for every two CRACs operating at 50%, he should be able to shut one off. Noland wanted to make sure that air got where it needed to go and figured that cold-aisle containment could help his cause.
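Noland's hunch is simple arithmetic, sketched below in Python. The per-unit utilization figures are illustrative assumptions, not his actual readings; the article says only that some CRACs ran at 100% and others at about 50%.

```python
import math

CRAC_CAPACITY_TONS = 30  # each of the lab's seven CRAC units

# Illustrative utilization readings (assumptions, not Noland's
# actual measurements): three units pegged at 100%, four at 50%.
utilizations = [1.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5]

# Cooling actually being delivered across the room, in tons.
delivered_tons = sum(u * CRAC_CAPACITY_TONS for u in utilizations)

# Fewest units that could carry that load if airflow were routed
# correctly, with each remaining unit running at or below 100%.
min_units = math.ceil(delivered_tons / CRAC_CAPACITY_TONS)

print(f"Delivered load: {delivered_tons:.0f} tons")
print(f"Units needed:   {min_units} of {len(utilizations)}")
# Four units at 50% carry the load of two, so two of the seven
# could be shut off, which is Noland's rule of thumb in action.
```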

Noland also considered implementing hot-aisle containment and installing blanking panels in the IT equipment racks. In addition, the lab was running a homegrown program that shut off unused IT equipment at night.
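The article doesn't describe how that homegrown program works. As a rough sketch of the idea only, the script below powers down a list of idle hosts over IPMI; the hostnames, credentials, and the assumption that the lab gear is IPMI-manageable at all are hypothetical, not Cisco's actual tooling.

```python
import subprocess

# Hypothetical inventory of lab devices that sit idle overnight.
IDLE_HOSTS = ["lab7d-rack12-dev01", "lab7d-rack12-dev02"]
IPMI_USER = "admin"                      # placeholder credential
IPMI_PASS_FILE = "/etc/lab7d/ipmi.pass"  # placeholder path

def power_off(host: str) -> None:
    """Send a chassis power-off to one host over IPMI."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-f", IPMI_PASS_FILE,
         "chassis", "power", "off"],
        check=True,
    )

# Run nightly from cron; a matching "power on" job would restore
# the gear before engineers arrive in the morning.
if __name__ == "__main__":
    for host in IDLE_HOSTS:
        power_off(host)
```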

Data center simulation time
Before implementing hot-aisle/cold-aisle containment, Noland decided to run some simulations. He called in Future Facilities, a software company that runs computational fluid dynamics (CFD) airflow simulations in data centers.

Noland was unhappy with the results.

"To be honest, I was a little upset with Future Facilities," Noland says, only half-joking. "I just wanted [them] to confirm that we were right."

Future Facilities' CFD analysis found that the lab's CRAC units didn't supply enough air for the equipment. As a result, a good deal of the IT equipment took in exhaust air from other IT equipment, creating a lot of air mixing. These conditions meant the CRACs had to pump out much colder air than necessary, wasting energy.

So by itself, cold-aisle containment wasn't an option. By isolating that air stream, some equipment in the contained aisle, which would normally draw on the exhaust of IT equipment in other rows, would be starved of cool air and overheat.

"The IT equipment required about four times more cubic feet per minute (CFM) than was available," said Sherman Ikemoto, the North American general manager of Future Facilities. "Chris Noland was unaware of this situation."

For similar reasons, the Future Facilities software found that hot-aisle containment also wouldn't work. And besides, Ikemoto said that if the data center deployed hot-aisle containment in its existing lab, it would have to reconfigure the sprinkler system in accordance with the fire code. That could cost up to $150,000.

Not only was hot-aisle/cold-aisle containment a bad idea, but the CFD analysis showed that even the existing hot-aisle/cold-aisle configuration and blanking panels were falling short. While servers and most other IT equipment have front-to-back airflow, Cisco's equipment takes in air from just about everywhere: the front, the back, the sides, and even the top and bottom.

"They are the ultimate recyclers," Noland said. "They will use air from everywhere to cool the equipment."

The limits of hot-aisle/cold-aisle containment
The only best practice that is bound to work in the Cisco lab is shutting off equipment at night. Other techniques can be used in a limited capacity.

Noland has begun setting up a new lab that he will configure as follows: Any IT equipment that has front-to-back airflow will have its own dedicated area within the data center. That portion of the lab will use blanking panels, a hot-aisle/cold-aisle configuration, and hot-aisle/cold-aisle containment.

Equipment with side-draft and other airflow-intake designs, on the other hand, will sit near the center aisles of the lab and run without any of these so-called best practices.

"We're still looking at developing best practices for side-draft," Noland said. "We're looking at some venting options. There are also some rack options which essentially turn side-draft into front-to-back flow, but the only thing is that takes up space."

Noland may have been partly joking when he said he wasn't happy with the CFD results. But in the end, the simulations helped to "show us the light and turn around a couple schemes. It's unfortunate, but it's the truth."

Let us know what you think about the story; email Mark Fontecchio, News Writer. You can also check out our Data Center Facilities Pro blog.

