What are your thoughts on raised flooring, downflow cooling, and lowered ceilings for ducting airflow? I am trying to make a case for a higher ceiling.
Let's take these one at a time.
Raised access flooring is generally used to provide both a place for the quantity of power and communications cable necessary to a data center, and a plenum to convey cool air to the cabinets. But in order to be useful for either, it needs to be high enough. Depending on data center size and the amount of equipment that needs to be cooled, a raised floor in today's world should be at least 18 inches high, preferably 24 to 30 inches, to hold the necessary cable bundles without impeding the high volumes of airflow needed to cool modern technology. Few buildings can accommodate those heights unless they have been specifically built as data centers.
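To see why floor height matters, here is a rough back-of-the-envelope sketch of how plenum air velocity rises as the floor gets shallower. All the figures (room width, airflow, blockage fraction) are hypothetical illustrations, not design values; real plenum design must account for tile placement, pressure, and obstructions.

```python
# Illustrative sketch only -- hypothetical numbers, not a design tool.
# Estimate the average air velocity in a raised-floor plenum for a given
# airflow and floor height. Higher velocity means more turbulence and
# pressure loss, and less uniform delivery through the perforated tiles.

def plenum_velocity_fpm(airflow_cfm: float, plenum_width_ft: float,
                        floor_height_in: float,
                        blockage_fraction: float = 0.3) -> float:
    """Average velocity (ft/min) through the free cross-section of the plenum.

    blockage_fraction is a rough allowance for the area lost to cable
    bundles and piping under the floor.
    """
    free_area_sqft = plenum_width_ft * (floor_height_in / 12.0) * (1.0 - blockage_fraction)
    return airflow_cfm / free_area_sqft

# Example: 20,000 CFM delivered across a 40-ft-wide room.
for height_in in (12, 18, 24, 30):
    v = plenum_velocity_fpm(20_000, 40, height_in)
    print(f"{height_in}-in floor -> ~{v:.0f} ft/min")
```

The point of the exercise: halving the floor height roughly doubles the plenum velocity for the same airflow, which is why a 12-inch floor stuffed with cable struggles where a 24- to 30-inch floor does not.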
Lower floors require special considerations, such as running cable overhead, or using more, carefully located, smaller air conditioners. Our general preference is for no false ceiling in order to achieve maximum room height and flexibility for installation of cable trays, lights, etc. But if inert gas fire suppression is used, a ceiling will probably be wanted to minimize the volume of the room and, consequently, the amount and cost of the gas. In short, there is no single "right" answer to this question.
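The gas-cost point above is simple geometry: clean-agent quantity scales roughly with the protected room volume, so a drop ceiling that reduces the protected height reduces the agent required. A minimal sketch, with made-up room dimensions:

```python
# Hypothetical illustration: inert-gas agent quantity scales roughly with
# the protected room volume, so lowering the ceiling lowers the gas cost.
# Dimensions below are invented examples, not recommendations.

def protected_volume_cuft(length_ft: float, width_ft: float, height_ft: float) -> float:
    """Volume of the space the suppression system must flood."""
    return length_ft * width_ft * height_ft

no_ceiling = protected_volume_cuft(50, 30, 14)    # open to the structure above
with_ceiling = protected_volume_cuft(50, 30, 9)   # sealed below a drop ceiling

ratio = with_ceiling / no_ceiling
print(f"ceiling cuts protected volume to ~{ratio:.0%} of open-room volume")
```

In this invented example the ceiling trims the protected volume by roughly a third, which translates fairly directly into less agent and smaller cylinders.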
Downflow cooling, if designed, installed, and operated correctly, will provide an excellent "base cooling" platform for the data center. Basic physics tells us that hot air rises, so cooling from below has two benefits: first, the hot air exhausted from the back of the equipment goes up, where it can most easily return to the air conditioners; second, pushing cool air from below can help move warm air away from the front air intakes of equipment. But this is far from perfect. Cold air also wants to drop, so getting it all the way to the top of a cabinet is difficult, and obstructions can keep the warm air from getting back to the air conditioners efficiently. High ceilings are best from a thermal standpoint. But again, it is the quality of the design that maximizes performance, not just the type of design that is chosen.
The alternative is overhead cooling. This is the standard method used in office buildings because, in basic terms, as the cold air drops from above it picks up the rising heat on the way down, which tends to keep the room fairly even in temperature from top to bottom. The problem in a data center is getting the cold air where it is needed, and keeping it away from the warm air so the air conditioners get a realistic return air temperature, which is what controls the amount of cooling they provide. An overhead cooling design is best accomplished with duct work, which can get quite large. The alternative is to blow the cold air down the cold aisles, which makes it more difficult to get the hot return air back to the units, since it wants to come from above, but needs to enter the air conditioners from below.
You don't want the air conditioners sucking their own cold air back into their own returns. It's not only a waste of energy; it also gives invalid temperature information to the air conditioner.
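That "invalid temperature information" effect is easy to quantify with a simple mixing calculation. The sketch below uses invented temperatures and a hypothetical short-circuit fraction purely to show the mechanism: the more cold supply air leaks back into the return, the cooler the mixed return temperature, and the less cooling the unit thinks the room needs.

```python
# Illustrative sketch with made-up numbers: when a cooling unit short-circuits
# some of its own cold supply air into its return, the mixed return
# temperature drops, so the unit "sees" less heat load than actually exists.

def mixed_return_temp_f(hot_return_f: float, cold_supply_f: float,
                        recirculated_fraction: float) -> float:
    """Simple weighted mix of hot exhaust air and recirculated cold supply air."""
    return (hot_return_f * (1.0 - recirculated_fraction)
            + cold_supply_f * recirculated_fraction)

print(mixed_return_temp_f(95, 60, 0.0))   # no short-circuit: unit sees 95.0 F
print(mixed_return_temp_f(95, 60, 0.3))   # 30% short-circuit: unit sees 84.5 F
```

With a 30 percent short-circuit in this example, the unit sees 84.5 degrees instead of 95 and throttles back its cooling accordingly, even though the equipment exhaust is just as hot as before.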
Some designs have used the ceiling plenum as the return air path in order to minimize this problem. This has been done in both overhead and under-floor cooling designs. It can work if the data center is properly sealed so there is no path through the ceiling either into or out of the room. But the warm air must still be brought back to the air conditioners, which can require supplemental fans and additional ductwork that must be coordinated with the overhead cable tray, which is now needed for cable if there is no raised floor.
In short, overhead cooling can become a more complex and expensive design than a good raised floor, but in some buildings it's the only logical way. And, of course, when a ceiling grid is installed, full flexibility for the installation of lights, cable tray and duct work disappears.
Engineering is a business of tradeoffs. You get nothing without giving up something else. There are no "pat answers," but in a data center, it is generally agreed that the higher the ceiling the better, and no ceiling is better still. You asked for my thoughts. I hope they are useful.