Data center cooling trends tend to hog the spotlight, but if you want to genuinely improve your data center, read up on the current fads in design and layout.
Trends in data center design range from small changes -- LED lights and white cabinets to improve visibility and cut energy consumption -- to complete pre-made data centers dropped on site.
Here's what we'll cover in this article:
- Pre-fabricated cabling
- ToR switches
- Raised access floors
- LED lighting
- White cabinets
- WUE monitoring
- Fog mist fire protection
- Containerized data center
- Pre-fabricated or modular data center
Almost every data center will benefit from prefab cabling, which should end the debate over the value of pre-installing a cable infrastructure. Rather than guessing how much of which cable your data center will need over its entire life, install what you know you need -- plus a reasonable overage for growth -- at the move-in or upgrade date. Then add cable as the need arises.
For prefab cabling, manufacturers build the cable harnesses to length based on facility drawings. They deliver quickly, eliminating the mess of technicians cutting wires and leaving scraps in the data center. Usually, the finished installation is neater than on-site cabling work, with high-density connectors and breakouts that tame the wilds of network switches and patch fields. Prefab cables come pre-tested, which is important for a high-performance infrastructure.
Another advantage of prefab cabling is total cost -- usually lower than for site-fabricated cabling, particularly in regions with high labor rates. Site-fabricated networks will become a thing of the past as data center managers realize the benefits of a prefab infrastructure.
A luxury approach only a couple of years ago, the top-of-rack (ToR) switch is rapidly catching on. The availability of network switches designed and priced for top-of-rack use, combined with the large number of network connections in most cabinets today, makes ToR switches the wave of the future. They greatly reduce the quantity of cable running from cabinets back to the core network, and those runs are mainly fiber. ToR consolidation makes installations and changes even easier than with a large cable infrastructure, and the switches offer automated reporting, so it's also easier to keep track of how things are connected.
The vulnerability of ToR network designs is that a single switch failure takes down the entire cabinet, so in high-performance data centers, be sure to use dual-path networking. Technicians also tend to mount the switches backwards in cabinets so the connectors face the rear, where the server connections reside -- which creates cooling problems for the switches. But ToR hardware is now available that maintains front-to-back cooling while leaving the connections where technicians want them. In high-density installations, it's the only way to go.
Designing without a raised floor is more common, to the point of becoming a fad. Raised floors still have value, but with so much infrastructure now going overhead (cable tray, power busway, refrigerant lines, fire protection and lighting), and the expanded use of in-row and overhead coolers, they're less useful.
Raised floors do often convey base cooling air, and those concerned about having water overhead want the under-floor space to run chilled water piping out to in-row coolers, rear door coolers and water-cooled servers. But very good designs are working without raised floors. At least half of the data centers designed in the next five years will forgo raised floors.
Certainly not a fad, but rather the way of the future for all of us: LED lighting is bright, well dispersed, highly energy efficient, physically compact and wonderful in the data center. The drawback at present is cost. Because most data centers should already have motion-sensor lighting control in each cabinet aisle, the lights run only briefly, making the return on investment in costly LED bulbs questionable. In a "lights out" facility, or even one with limited internal activity, LED bulbs will probably never pay for themselves in energy savings, which could limit adoption. But the lighting quality may be well worth the cost premium, particularly considering how poorly lit most data centers are and how much light all-black cabinets and hardware absorb.
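The payback concern above can be made concrete with a quick calculation. All figures below -- fixture cost premium, wattage saved, electricity price, annual on-hours -- are illustrative assumptions, not vendor data; the point is how strongly on-hours drive the result when motion sensors keep the lights off most of the time.

```python
def payback_years(extra_cost, watts_saved, hours_on_per_year, price_per_kwh):
    """Simple payback: LED fixture cost premium divided by annual energy savings."""
    annual_savings = watts_saved / 1000 * hours_on_per_year * price_per_kwh
    return float("inf") if annual_savings == 0 else extra_cost / annual_savings

# Assumed figures: $40 premium per fixture, 30 W saved vs. a fluorescent
# fixture, $0.12/kWh. With motion sensors, an aisle might be lit only
# about 500 hours a year; without them, lights burn all 8,760 hours.
print(payback_years(40, 30, 500, 0.12))   # ≈ 22 years -- likely never pays back
print(payback_years(40, 30, 8760, 0.12))  # ≈ 1.3 years if lights run 24/7
```

Under these assumptions, the very motion sensors that make a data center efficient also stretch LED payback past the life of the fixture -- which is why the lighting-quality argument, not energy savings, tends to carry the decision.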
Definitely a fad at the moment that should improve aisle lighting and reduce lighting energy, white cabinets appear in some of the world's highest efficiency data centers. White cabinets may catch on when the industry recognizes their potential for lighting improvement, but until computing equipment changes to a lighter color as well, the cabinets won't help save much energy.
One of The Green Grid's metrics, water usage effectiveness (WUE) recognizes the growing concern over water waste. Unlike its sibling, power usage effectiveness (PUE), WUE gets little recognition and scant publicity, even from the companies that rushed to adopt PUE. WUE is slowly becoming more important as data center water use and waste compound already alarming shortages of this vital commodity.
WUE won't catch on like PUE unless some severe, long-term drought results in a government mandate to monitor water. It will probably be used mainly by those who build and operate the most sophisticated and efficient data centers, as well as data centers in water-poor regions.
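For reference, The Green Grid defines WUE as annual site water usage divided by annual IT equipment energy, expressed in liters per kilowatt-hour; PUE is the analogous ratio of total facility energy to IT energy. A minimal sketch, with purely illustrative annual figures:

```python
def wue(annual_water_liters, annual_it_energy_kwh):
    """Water usage effectiveness (L/kWh): site water used per unit of IT energy."""
    return annual_water_liters / annual_it_energy_kwh

def pue(annual_facility_energy_kwh, annual_it_energy_kwh):
    """Power usage effectiveness: total facility energy per unit of IT energy."""
    return annual_facility_energy_kwh / annual_it_energy_kwh

# Illustrative annual figures for a mid-size facility (assumptions, not data):
# 6 million liters of water, 5.6 million kWh total, 4 million kWh of IT load.
print(wue(6_000_000, 4_000_000))  # 1.5 L/kWh
print(pue(5_600_000, 4_000_000))  # 1.4
```

As with PUE, lower is better, and the metric only means something when it is tracked over a full year -- evaporative cooling makes water draw highly seasonal.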
What used to be a major novelty is gaining both interest and popularity as fog mist fire protection is approved in more jurisdictions. Fog mist uses water instead of expensive gas, but minimizes exposure of equipment to damage, all in a single system that complies with code. Whether fog mist goes beyond a fad depends almost entirely on its acceptance by the Authorities Having Jurisdiction. If it's approved for use in your municipality, check it out.
While the containerized data center has its place and can solve real problems, it is over-hyped. Containerized data centers work for a Google or Facebook that deploys large numbers of similar or identical servers. They scale incrementally so the owners can pretty much close the doors and let them run. Many containerized solutions are not even meant to be serviced. If servers fail, they shut down; plenty of others can take their place.
The design promotes energy efficiency, but for a data center that needs to remain agile, containers can be very limiting and difficult to work in. Containers deploy quickly compared to site-built data centers, but the large power and cooling plants that accompany them are time-consuming and potentially costly to install.
Containerized data centers can provide temporary facilities while permanent spaces undergo renovations, but few companies can afford to buy, support and then abandon the containers. If you acquire containerized data centers, regard them as permanent additions with a planned use.
While containers are not as simple to deploy as they may sound initially, the approach is becoming more sophisticated. Expect adoption increases in certain sectors.
Modular data centers aren't just another kind of container. Prefabs are actual buildings, similar in concept to prefab houses, erected on site much faster than a custom-built structure. Even with constraints on size and configuration, modular data centers are far more flexible than containers. If you go this route, look for designs that incorporate appropriately scaled space for the power and cooling infrastructure, as well as insulation and structure commensurate with the location and the state of the art in this field.
Expect prefabs to catch on for remote sites and smaller facilities where fast, low-cost builds are a higher priority than architectural synergies with existing structures. Don't expect a great deal of growth in this segment beyond these niches.
About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.