Pick a data center layout: Raised floors vs. overhead cabling

There's no definitive design when it comes to data center infrastructure. Cost, manageability and cooling effectiveness are all factors during the selection process.

In data center infrastructure design, admins may wonder whether it's more beneficial to place all cabling overhead and save the cost of a raised floor, or to pay more upfront and reap the benefits a raised floor brings. Some teams might decide a hybrid approach is better and combine the advantages of overhead and raised floor designs.

With or without raised floors, there's a lot of flexibility in data center design. An organization should choose the layout that provides the most future capability at the best long-term economy.

A look at raised floor development

Raised access floors started with 1960s mainframes. Bus-and-tag cables, an inch or more in diameter, connected widely spaced boxes. Lengths were fixed, so if admins required 30 feet, they used a 50-foot cable and stored the extra 20 feet under the floor with the heavy power cables. Leased hardware changed with every upgrade, so raised floor flexibility was crucial, but a 6- to 8-inch floor depth was fine.

In the 1970s, facilities started to use under-floor cooling. The raised floor made setup simple: admins could cut a hole in the floor under each cabinet and push cooling air through, and rearranging cables resolved any airflow obstructions.

But mainframes became more powerful while transistor technology stayed essentially the same, so heat densities grew beyond what air could handle. IBM introduced water cooling in the mid-1960s with its System/360 machines. Piping went under the floor, and water cooling continued in the data center.

The introduction of complementary metal-oxide-semiconductor (CMOS) transistors dramatically reduced power draw, and organizations returned to air cooling. They have continued to use air, despite power and heat load growth far beyond the levels that ushered in water cooling decades before.

Pushing huge quantities of air through a raised floor plenum to hundreds of cabinets with widely varying cooling requirements is challenging. With this data center layout, obstacles disrupt airflow, and the mountains of abandoned cable in most facilities became air dams. To handle the required air volumes, floor depths grew to 18 to 24 inches, but the problems remained.

Go overhead with a data center layout

Putting power and cable overhead is easy and removes much of the air path disruption. This setup makes the cabling more visible and better organized.

In 2003, the US National Electrical Code added Article 645, which requires the Emergency Power Off (EPO) button at every exit door. This button enables the fire department to instantly cut power to the room and hard crash every computing device.

The 2011 Code offered an alternative location, and protective covers certainly help, but most data center operators would rather not have the EPO button at all. A common workaround is to put power infrastructure overhead, which keeps the room's wiring out of Article 645's under-floor provisions.

For power distribution in an overhead layout, conventional power cables are bulky and come in fixed lengths. Busways are more expensive, but neater, and they bring more management flexibility: admins can insert power taps wherever they require, customized for any voltage, phase, breaker rating or connector configuration, in seconds. Selecting the right busway type and size is critical to long-term reliability and flexibility; a rough capacity check is sketched below.
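As a rough illustration of busway sizing, here is a minimal sketch in Python. The row of 12 cabinets at 6 kW each, the 415 V three-phase feed and the 80% continuous-load derating are illustrative assumptions, not figures from this article.

    import math

    def busway_capacity_kw(voltage_ll, ampacity, power_factor=0.95):
        # Approximate three-phase capacity in kW: sqrt(3) * V * I * PF / 1000.
        return math.sqrt(3) * voltage_ll * ampacity * power_factor / 1000.0

    # Hypothetical row: 12 cabinets drawing 6 kW average each.
    row_load_kw = 12 * 6.0

    # Continuous loads are typically held to 80% of the rating.
    for amps in (100, 225, 400):
        usable_kw = 0.8 * busway_capacity_kw(415, amps)
        fit = "fits" if row_load_kw <= usable_kw else "does not fit"
        print(f"{amps} A busway: ~{usable_kw:.0f} kW usable; {row_load_kw:.0f} kW row {fit}")

In practice, admins would also weigh fault current, voltage drop and redundancy, but even this arithmetic shows why busway ampacity has to be chosen against realistic row loads rather than nameplate guesses.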

Dual power feeds double the overhead busway runs, and busways should have infrared scanning windows so admins can check them yearly for loose connections. This reduces the need for technicians to climb ladders in protective arc-flash suits to access open wiring.

The main challenge of overhead infrastructure is design and installation coordination. Designs that avoid conflicts between dual-circuit busways, cable trays, fiber ducts and lighting require substantial design detail and field supervision; 3D drawings are mandatory.

Cool without a raised floor

If an organization eliminates the raised floor data center layout, its IT team must set up different cooling systems, which can be the biggest challenge of all. There are five ways to cool cabinet rows without under-floor air:

  • Overhead ducting to cold aisles, with ducted or ceiling plenum air return from hot aisles;
  • in-row coolers;
  • overhead spot coolers;
  • rear door heat exchangers; and
  • direct liquid cooling to cabinets or computing hardware.

Air ducts must be large -- as big as 4 x 6 feet in each aisle -- to handle the air volume needed for average heat loads and the low velocities necessary for even distribution. Liquid-based cooling setups require insulated piping for refrigerant or water that can be 4 to 8 inches in diameter.
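To see where duct sizes like that come from, here is a back-of-the-envelope sketch using the standard sensible-heat approximation for air (CFM ≈ 3.16 × watts / ΔT in degrees F). The row load, temperature rise and velocity are assumptions for illustration, not figures from this article.

    # Back-of-the-envelope duct sizing for an overhead cold-aisle supply.
    row_load_w = 12 * 6000        # assumed: 12 cabinets at 6 kW each
    delta_t_f = 20.0              # assumed supply-to-return rise, degrees F
    velocity_fpm = 750            # low velocity for even distribution

    cfm = 3.16 * row_load_w / delta_t_f          # sensible-heat approximation
    duct_area_sqft = cfm / velocity_fpm

    print(f"Airflow: ~{cfm:,.0f} CFM")
    print(f"Duct cross-section: ~{duct_area_sqft:.1f} sq ft")
    # ~11,400 CFM needs ~15 sq ft at 750 FPM; heavier rows or slower air
    # push quickly toward the 4 x 6 foot (24 sq ft) figure cited above.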

Admins should realize that most of these offerings don't provide humidity control, so they must invest in some form of environmental monitoring and conditioning.
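As a minimal sketch of what such monitoring might check, the snippet below compares readings against ASHRAE's commonly cited recommended envelope of roughly 18 to 27 degrees Celsius and 60% maximum relative humidity; any real deployment should confirm its own thresholds.

    # Check readings against an assumed ASHRAE-style recommended envelope.
    TEMP_RANGE_C = (18.0, 27.0)   # dry-bulb temperature limits
    MAX_RH_PCT = 60.0             # relative humidity ceiling

    def check_environment(temp_c, rh_pct):
        alerts = []
        if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
            alerts.append(f"Temperature {temp_c:.1f} C outside {TEMP_RANGE_C}")
        if rh_pct > MAX_RH_PCT:
            alerts.append(f"Humidity {rh_pct:.0f}% exceeds {MAX_RH_PCT:.0f}%")
        return alerts

    # Sample readings stand in for whatever the monitoring hardware provides.
    for alert in check_environment(temp_c=29.2, rh_pct=64.0):
        print("ALERT:", alert)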

Considerations for a hybrid approach

Hybrid designs use the raised floor for most piping, with power and cabling overhead; refrigerant piping for spot coolers must still run high. Raised floors also provide level floor support, as well as a grid for cabinet alignment.

When it comes to cost, raised access floors are expensive, but overhead air ducts are also costly to design and install. Overhead piping requires special supports and, although leak damage is almost unheard of, drip pans and leak detection cables are often used as well. Overhead systems can end up costing as much as raised floors; the reduction in potential leakage may justify a raised floor investment.

Streamline overhead installation

New products make overhead infrastructure data center layouts much easier. Ceiling grids designed specifically for data centers eliminate the need to drill hundreds of holes into overhead slabs, install beam clamps that disturb fireproofing, or cut ceiling tile penetrations for support rods. These grids also reduce the need for methods that produce particulates, which clog filters and reduce cooling effectiveness.

One standard option is a grid that supports 100-pound loads. Beam grid offerings can support 300 pounds and can hold above-cabinet spot coolers as well as power, cable and piping. Grid-based tracks accept threaded rod anchors, providing virtually infinite adjustability in all three axes. A simple load check against those ratings is sketched below.
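For instance, a quick sanity check of the suspended weight in one grid bay against those ratings; the component weights here are hypothetical placeholders, not vendor figures.

    # Sum the assumed suspended weight in one grid bay and compare it with
    # the 100 lb and 300 lb grid ratings cited above.
    bay_components_lb = {
        "busway section": 40,
        "cable tray with cable": 55,
        "fiber duct": 15,
        "spot cooler": 150,
    }

    total_lb = sum(bay_components_lb.values())
    for grid, rating_lb in (("standard grid", 100), ("beam grid", 300)):
        status = "OK" if total_lb <= rating_lb else "overloaded"
        print(f"{grid} ({rating_lb} lb rating): {total_lb} lb suspended -> {status}")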

These setups help admins suspend infrastructure, including lighting, where it makes the most sense, and easily add, remove or relocate hardware to avoid conflicts and provide maximum cabinet illumination.
