DALLAS -- Ditching your raised floor in favor of newer overhead cooling technologies might not be such a good idea,
according to a study by two researchers at IBM.
"Hot spots are a major concern for data centers," said Roger Schmidt, an engineer at IBM, last week during the winter conference for the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). "The question is how do you cool those hot spots in the racks."
The study, performed by Schmidt and Madhusudan Iyengar, another IBMer, looked at two high-density data centers, each with four rows of 32 kW racks in a hot-aisle, cold-aisle configuration. The raised-floor setup featured four computer room air conditioning (CRAC) units and perforated tiles in the cold aisles; the overhead cooling arrangement used two overhead diffusers to supply the chilled air and four return vents to suck in the hot air.
The conclusion: With chillers operating at 100% and 80% levels, raised-floor cooling resulted in cooler rack intake temperatures. At 60% levels, overhead cooling did better.
But Robert Sullivan, a senior consultant with The Uptime Institute, suggested that overhead cooling's superior results at 60% were moot, since at that level, the air probably isn't cold enough to be safe for IT equipment.
True, "at the reduced airflow rate, the overhead (cooling setup) had more cabinets with lower temperatures," Sullivan said. "The thing you have to add is that all the temperatures are higher."
IBM's Schmidt agreed, saying that "when you really reduce airflow rate, none of those temperatures are probably acceptable to IT manufacturers."
In their study comparing the two methods, the researchers showed thermal images that demonstrated why the overhead cooling method was inferior at the higher airflow levels. Hot air exhausted by the servers into the hot aisles could more easily travel up and over the rows of servers and mix with the cold air in the cold aisles, raising intake temperatures for the servers.
With underfloor cooling, where the chilled air shot up from the perforated tiles, the air was more dispersed and better able to counteract the hot air recirculating from the hot aisles.
Schmidt added during the presentation that a lower-density data center would probably yield different results.

"A lot of this is driven by total heat load," he said. "Airflow is large. I think my guess is it might be a lot different."
Overhead cooling on the rise
The study compared the two setups from a purely technological point of view and did not take into account infrastructure considerations that go into deciding whether to build a data center for raised floor or overhead cooling.
Speakers at the ASHRAE conference, for example, said that raised floors should provide at least two feet of clearance beyond the space taken up by cabling, piping and other obstructions. Otherwise, airflow could be impeded. But not many buildings fit this description unless they were specifically constructed as data centers.
For a long time, raised floors were the de facto data center standard. But in the late 1990s, nonraised-floor environments came into vogue, as some data center managers felt that handling IT equipment was easier with the cabling overhead, and the influx of smaller servers made water piping unnecessary. Overhead cooling equipment, such as Liebert Corp.'s XD or Data Aire Inc.'s ceiling-mounted products, followed close behind.
The Lawrence Berkeley National Laboratory also has its recommendations for best airflow practices in the data center; ASHRAE's Technical Committee 9.9, of which Schmidt is chairman, has recommendations as well.