Improving facility longevity with flexible data center design

A data center design that incorporates modular or easily expanded infrastructure can improve a facility’s effective life, even as technology changes.

Anyone have a crystal ball? Twenty-five years ago, could we really imagine what computing technology would look like today, or what data centers would require to support it? I’d like to go Back to the Future in Doc Brown’s DeLorean time machine to see what’s ahead, but that’s only possible in Hollywood. Yet most requirements for new data centers stipulate designs that are expected to last for 15 to 20 years. Is this a ludicrous requirement? Maybe, maybe not. This tip examines what data center owners can do to ensure their data center design stands the test of time.

Anticipating future data center design
What we know now that we didn’t know 25 years ago is how radically technology can change. The key to data center longevity is being able to accommodate new technology, which means any new facility must be designed with as much flexibility as possible. We have the tools to accomplish this. We just need to know what they are and how best to use them.

Space should be less of a constraint to data center design in coming years. Equipment will continue to get physically smaller while packing more compute power, virtualization will vastly increase levels of consolidation and the cloud will absorb a greater number of routine computing needs. As a result, the explosive demand for floor area that we’ve seen for a decade will likely slow down in end user data centers. Two possible space challenges will be storage and larger cabinet footprints. Much of the former is likely to end up off-site. We can handle the latter by planning good aisle widths.

As long as there’s enough incoming utility capacity, backup power and distribution are now the easiest parts of the infrastructure to design flexibly. Uninterruptible power supply (UPS) architectures let us scale from very small to very large not only incrementally, but also without shutdowns, if we plan well enough. Switchgear, used to transfer from utility to backup power, must be sized for ultimate growth at the outset because it’s custom-built. But the expensive circuit breakers for future expansions can be left out until needed, as long as the data center design uses “draw-out” type devices. Since switchgear consumes virtually no energy, there’s no reduction in efficiency when oversizing and underutilizing it, as there is with UPS and mechanical refrigeration equipment. Power distribution from the UPS to the cabinets can also be very flexible. In-row breaker panels, plug-together wiring and/or overhead power busway all make it easier than ever to add or change circuits when needed.
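To see how incremental UPS scaling might be planned, here is a minimal sketch. The 50 kW module size, the load forecast and the N+1 policy are illustrative assumptions only, not figures for any particular product.

```
# A minimal sketch of incremental UPS capacity planning, assuming a scalable
# (modular) UPS architecture. The module size, load projections and N+1
# redundancy policy below are hypothetical illustrations, not vendor figures.

MODULE_KW = 50          # assumed capacity of one UPS power module, in kW
PROJECTED_LOAD_KW = {   # assumed IT load forecast by year
    2024: 120,
    2026: 200,
    2028: 320,
    2030: 450,
}

def modules_needed(load_kw, module_kw=MODULE_KW, redundant=1):
    """Return module count to carry load_kw plus 'redundant' spare modules (N+X)."""
    base = -(-load_kw // module_kw)   # ceiling division
    return int(base) + redundant

for year, load in sorted(PROJECTED_LOAD_KW.items()):
    n = modules_needed(load)
    print(f"{year}: {load} kW IT load -> {n} x {MODULE_KW} kW modules (N+1)")
```

The point of the exercise is that only the frame, wiring and switchgear positions need to be sized for the final year; the modules themselves can arrive with the load.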

Addressing cooling needs and challenges
Cooling can be another matter. We have several ways to make cooling flexible, but making it both flexible and energy efficient is more challenging. Conventional “perimeter cooling” with computer room air conditioners (CRACs) or their chilled water equivalents, computer room air handlers (CRAHs), can now be over-designed without an operating cost penalty. The use of variable frequency drives (VFDs) on fans, compressors and water pumps can actually improve cooling, add redundancy and lower operating costs by running more units at lower speeds and capacities. They will then self-adjust as loads increase. Many products will now also intercommunicate to avoid the wasted energy that used to result from one unit humidifying while another was dehumidifying. But the opportunities for modular growth and adjustment in data center design don’t stop there, not by a long shot!
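The energy argument for running more VFD-equipped units at reduced speed comes from the fan affinity laws: airflow scales roughly with speed, but fan power scales roughly with the cube of speed. The short sketch below illustrates the effect with an assumed 10 kW fan; real units have minimum speeds and less-than-ideal part-load behavior.

```
# A minimal sketch of why VFD-controlled fans sharing a load can cut energy:
# per the fan affinity laws, airflow scales ~linearly with speed while fan
# power scales ~with the cube of speed. Numbers are illustrative only.

RATED_FAN_KW = 10.0   # assumed power draw of one CRAH fan at 100% speed

def fan_power_kw(speed_fraction, rated_kw=RATED_FAN_KW):
    """Approximate fan power at a given speed fraction (affinity-law cube rule)."""
    return rated_kw * speed_fraction ** 3

# One fan at full speed vs. two fans each at half speed moving the same total air
one_fan = fan_power_kw(1.0)
two_fans = 2 * fan_power_kw(0.5)
print(f"1 fan @ 100%: {one_fan:.1f} kW")
print(f"2 fans @ 50%: {two_fans:.1f} kW")   # ~2.5 kW, roughly a 75% reduction
```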

By far, the easiest way to provide flexible cooling is with chilled water systems. Chiller plants are not practical for everyone, but where they are, piping can be sized for growth and provided with extra “spigots” for future connection of a variety of cooling devices. In-row coolers, rear-door coolers, overhead coolers and water-cooled cabinets are all available now, and the future will probably include direct liquid-cooled servers. If unused pipe connections are equipped with shut-off valves and spill-proof “quick connects,” they should be perfectly safe. If you don’t want to install permanent future piping down aisles, you can limit connection ports to perimeter header pipes and extend later using splice-free, flexible PEX tubing, if building codes allow. It’s an easy and proven way to handle changing cooling needs.
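When sizing those future “spigots” and header pipes, a common rule of thumb is that water flow in GPM equals tons of cooling times 24, divided by the design delta-T in degrees Fahrenheit. The sketch below applies that rule to a few assumed loads at an assumed 12 F delta-T; actual pipe, valve and pump sizing belongs to the mechanical engineer.

```
# A minimal sketch of sizing chilled-water flow for future cooling "spigots",
# using the rule of thumb GPM = tons x 24 / delta-T (deg F). The loads and the
# 12 F delta-T below are assumptions for illustration only.

DELTA_T_F = 12.0   # assumed supply/return temperature difference, deg F

def required_gpm(cooling_tons, delta_t_f=DELTA_T_F):
    """Water flow (GPM) needed to carry a cooling load at a given delta-T."""
    return cooling_tons * 24.0 / delta_t_f

for tons in (10, 30, 60):   # e.g. an in-row cooler, a future row, a future zone
    print(f"{tons} tons -> {required_gpm(tons):.0f} GPM")
```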

To properly accommodate this growth, chiller plants should also be reasonably modular and controlled with VFDs. Redundancy, which reduces the load on each unit, makes this even more important. Operating large chillers at low capacity is not energy efficient. Chiller selections should be made by a mechanical engineer thoroughly familiar with data center design requirements. Modular can mean different things in different circumstances, and not all chillers are enterprise grade.
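To put a rough number on that part-load penalty, the sketch below compares an oversized chiller loafing at low load with a right-sized modular chiller running near its sweet spot. The kW-per-ton figures are assumptions for illustration only; real performance comes from manufacturer part-load data.

```
# A minimal sketch of the part-load efficiency point above. The kW-per-ton
# figures are assumed purely for illustration; real chiller part-load
# performance comes from manufacturer data (e.g. IPLV/NPLV ratings) and
# depends heavily on VFDs and condenser-water temperatures.

current_load_tons = 100   # assumed day-one cooling load

options = {
    # option name: assumed kW/ton at this operating point
    "one 500-ton chiller at ~20% load": 1.1,
    "one 125-ton modular chiller at ~80% load": 0.6,
}

for name, kw_per_ton in options.items():
    print(f"{name}: ~{current_load_tons * kw_per_ton:.0f} kW of chiller power")
```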

If chillers are not practical or economical, condenser water can be piped from cooling towers in much the same way as chilled water, and used with modular in-row cooling units. Since cooling towers must usually be lifted to the roof by crane, and require expensive structural steel supports, it’s best to oversize them at the outset. For managers still fearful of water in their data centers, there are refrigerant-based options that offer a range of modular cooling devices similar to the chilled water systems. However, refrigerant piping is significantly more challenging and is best installed in advance.

A big plus is that all these modular cooling systems are highly energy-efficient, close-coupled technologies that capture heat at its source. Therefore, they provide flexibility while contributing to green data center design and reducing operating costs. Even better, water-based systems can take advantage of the additional energy savings of water-side free cooling in most climate zones.
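As a rough way to gauge that free-cooling potential, a water-side economizer can typically carry the full load whenever the outdoor wet-bulb temperature, plus the tower and heat-exchanger approach temperatures, stays below the chilled-water supply setpoint. The sketch below encodes that check with assumed setpoints and sample wet-bulb readings; a real assessment would use bin weather data for the site.

```
# A minimal sketch of estimating water-side free-cooling potential: when the
# outdoor wet-bulb temperature is far enough below the chilled-water supply
# setpoint, the cooling towers plus a heat exchanger can carry the load
# without running chillers. Setpoints, approaches and sample readings below
# are assumptions for illustration, not a design calculation.

CHW_SUPPLY_F = 60.0      # assumed chilled-water supply setpoint (deg F)
TOWER_APPROACH_F = 7.0   # assumed cooling-tower approach to wet-bulb
HX_APPROACH_F = 3.0      # assumed plate heat-exchanger approach

def free_cooling_available(wet_bulb_f):
    """True if the towers can make water cold enough for full free cooling."""
    return wet_bulb_f + TOWER_APPROACH_F + HX_APPROACH_F <= CHW_SUPPLY_F

sample_hours = [38, 45, 52, 58, 65]   # assumed hourly wet-bulb temps (deg F)
hours = sum(free_cooling_available(wb) for wb in sample_hours)
print(f"{hours} of {len(sample_hours)} sample hours qualify for full free cooling")
```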

Flexibility is a little more challenging with direct expansion (DX) systems, where every cooling unit is independent and has its own refrigerant piping to the outside. With planning, pathways (and even refrigerant piping) can be installed to locations expected to require perimeter CRACs or in-row devices in the future. There’s even a modular approach that installs with a DX-type control unit, delivers cooling via overhead refrigerant piping and supports a variety of cooling devices that can be connected, added or moved with relative ease.

In short, designing for flexibility requires a lot of thought and planning, but adds little to an initial capital budget. It saves enormously by improving facility longevity, thereby reducing the need for future construction and upgrade costs. Techniques like free cooling and server consolidation can also reduce operating expense.

About the expert: Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom & Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professional program, is a data center power and cooling expert, is widely published, speaks at many industry seminars and is a corresponding member of ASHRAE TC9.9, which publishes a wide range of industry guidelines.

This was first published in August 2011
