A data center design guide to get it right the first time

Corporate moves, rapid IT refreshes, virtualization initiatives and other factors affect your data center design. Use this data center planning guide for a facility that lasts.

Several fundamental factors drive data center design requirements and costs. Get them right before establishing budgets and drafting blueprints. Failure to do so virtually guarantees problems after occupancy.

Fixing mistakes in a data center after it's running is challenging, expensive and operationally dangerous. If the build team uses this data center design guide to properly determine requirements in the beginning, and the design conforms to modern industry standards and practices, major upgrades should not be necessary for many years.

The three most significant factors in a data center design are level of reliability, potential for growth -- positive or negative -- and rate of churn or hardware refresh.

Determining true IT reliability needs

Everyone thinks their systems and applications are mission critical, but the real measure is how the company would fare without them in an outage of any duration. An outage could put security, human life or some other serious risk factor at stake, or its impact could be measured in lost money and reputation.

The effect of an outage should be quantifiable based on duration: fifteen minutes, a half hour, an hour, two hours, four hours, eight hours or longer. That tells designers how much redundancy to include in the data center, and allows the organization to compare reliability costs with potential exposure.
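To make that comparison concrete, here is a minimal sketch of the quantification in Python. The hourly loss figure is a hypothetical placeholder; real values would come from the organization's own business-impact analysis.

```python
# Hypothetical outage-exposure worksheet: translate outage duration into an
# estimated financial exposure that can be weighed against redundancy costs.

HOURLY_LOSS_USD = 50_000  # assumed loss per hour of outage; substitute your own figure
DURATIONS_HOURS = [0.25, 0.5, 1, 2, 4, 8]  # the duration steps discussed above

def outage_exposure(duration_hours: float, hourly_loss: float = HOURLY_LOSS_USD) -> float:
    """Rough financial exposure for an outage of the given duration."""
    return duration_hours * hourly_loss

for d in DURATIONS_HOURS:
    print(f"{d:5.2f} h outage -> estimated exposure ${outage_exposure(d):,.0f}")
```

Comparing those figures with the incremental cost of each redundancy level makes the tier discussion that follows far less abstract.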

Too often, the operator specifies the Uptime Institute's rigorous Tier IV uptime level without a full understanding of what that really means in terms of design complexity, capital cost or operational support. In data centers of significant scale, even if Tier IV is justified, it's probably not necessary for the entire space. Consider zoning the facility, with less critical functions in a Tier III or even a Tier II area.

A realistic assessment of criticality, system by system, should be the first step of any data center design guide -- before any designing takes place. With that information, and an understanding of what is actually driving classifications, the data center designer determines the most appropriate and cost-effective approach.

Even if the facility is designed with uniform redundancy and reliability goals throughout, the process of determining uptime requirements will help prioritize which systems get primary attention for restoration in a major outage.

The trouble with growth predictions

While the cloud provides relief for data centers running out of space for new cabinets, many organizations still keep critical computing under direct control. The data center planning guide should include considerations for on-site moves as well as incremental growth. More than a few companies pulled operations back into their own data centers after experiencing cost and/or performance issues with a service provider.

To make predictions more challenging, power and heat loads and space requirements often grow independently, even as IT equipment cabinet counts go down. Smaller IT hardware generally means a reduced vertical dimension, but that usually makes the hardware deeper. Standard-height cabinets must now be 42" to 48" (1060 to 1200 mm) deep, instead of the legacy 36" (900 mm) depth. Data centers also require wider aisles for maneuvering racks and equipment. Cabinets wider than the legacy 24" (600 mm) accommodate increased cable density, as well as the dual power strips and masses of power cords that go with them, without blocking exhaust air flow; the recommended norm today is a nominal 30" (760 mm) width. The combined increase in cabinet depth and width requires more floor space even with no actual growth in cabinet count, as the sketch below illustrates.
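The cabinet dimensions in the rough calculation below come from the nominal figures above; the aisle widths and ten-cabinet row length are assumptions for illustration only.

```python
# Compare the floor area of a ten-cabinet row in legacy versus modern cabinet
# dimensions, including one aisle per row. All dimensions in millimetres.

LEGACY = {"width_mm": 600, "depth_mm": 900}
MODERN = {"width_mm": 760, "depth_mm": 1200}

def row_area_m2(cabinets: int, dims: dict, aisle_mm: int) -> float:
    """Floor area of one cabinet row plus its aisle, in square metres."""
    row_width_m = cabinets * dims["width_mm"] / 1000
    row_depth_m = (dims["depth_mm"] + aisle_mm) / 1000
    return row_width_m * row_depth_m

legacy = row_area_m2(10, LEGACY, aisle_mm=900)    # assumed 36" legacy aisle
modern = row_area_m2(10, MODERN, aisle_mm=1200)   # assumed 48" wider aisle
print(f"Legacy row: {legacy:.1f} m^2, modern row: {modern:.1f} m^2, "
      f"increase: {100 * (modern / legacy - 1):.0f}% with no extra cabinets")
```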

The more IT hardware you pack into a cabinet, and the higher performance you get from each device, the more power is required and the greater the heat density to be cooled. Virtualization and consolidation are the major drivers behind this change to data center design guides. Dense operations require more space for uninterruptible power supplies, power distribution units and air conditioning equipment, much of which is now installed within equipment rack rows. Even if the newer approaches do not increase total floor space requirements, they do change physical layouts.

Growth is particularly difficult to predict in corporations that carry out mergers and acquisitions, as well as in research organizations where grants suddenly inject major computing systems into the facility.

There's no truly accurate prediction of growth for more than a few years ahead, but a realistic assessment of the probabilities will enable modular designs that support elastic scaling over many years. That kind of flexibility is the true measure of a successful modern data center design.

Churn in a data center design guide

Some organizations maintain owned data centers due to a high rate of churn. Financial institutions have short hardware refresh cycles to maintain peak competitive performance. Academic institutions see large research systems show up with little notice. Any enterprise can have segments that change quickly for various reasons. A high rate of churn requires that the data center quickly and easily adjust capacity, usually a hands-on task. The large and frequent fluctuations in space, power and cooling demands drive up hosting facility bills.

Rate of churn is easily quantifiable based on operational history. This information significantly influences the degree of flexibility built into the data center design. Get refresh information right to support changing computing requirements, maintain energy efficiency and minimize energy costs.
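Churn can be pulled straight out of asset records. The sketch below assumes a hypothetical export of install and decommission dates; any asset database or CMDB report with those two fields will do.

```python
from datetime import date

# Hypothetical asset lifecycle records: (asset name, installed, decommissioned)
records = [
    ("rack-a1-server-01", date(2012, 3, 1), date(2014, 9, 1)),
    ("rack-a1-server-02", date(2013, 1, 15), date(2015, 2, 1)),
    ("rack-b2-server-01", date(2011, 6, 1), date(2015, 6, 1)),
]

def average_refresh_years(history) -> float:
    """Mean in-service lifetime, a first approximation of the refresh cycle."""
    lifetimes = [(removed - installed).days / 365.25 for _, installed, removed in history]
    return sum(lifetimes) / len(lifetimes)

print(f"Average refresh cycle: {average_refresh_years(records):.1f} years")
```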

Power and heat loads

Once the fundamental requirements for the design guide are understood, establish the actual parameters, starting with power and heat loads.

Avoid the outdated watts per square foot measure -- today's data centers are anything but uniform across the entire space. Designing for averages creates inadequate capacities in some places and overprovisioning in others, as well as unnecessary costs if the entire facility is equipped for maximum projected load.

Develop load estimates by cabinet. Existing cabinet loads are easy to obtain from smart power strips or via an electrician's clamp-on meter. Circuit load measurements from the clamp-on meter are instantaneous and not averaged over time, but still provide a good indication of relative cabinet draws from which a designer can make sizing judgments.
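As a sketch of how those readings feed a design, the snippet below rolls hypothetical per-cabinet measurements up by row and applies an assumed headroom factor. Since virtually all power drawn by IT equipment ends up as heat, the same kW figures size the cooling.

```python
# Hypothetical per-cabinet power readings in kW, e.g. from smart power strips
# or spot checks with a clamp-on meter.
cabinet_kw = {
    "row-A": [3.2, 4.1, 2.8, 5.0],
    "row-B": [7.5, 8.2, 6.9],            # denser, virtualized cabinets
    "row-C": [1.1, 1.4, 0.9, 1.2, 1.0],
}

DESIGN_MARGIN = 1.2  # assumed 20% headroom for growth and measurement error

for row, loads in cabinet_kw.items():
    total, peak = sum(loads), max(loads)
    print(f"{row}: {len(loads)} cabinets, measured {total:.1f} kW "
          f"(peak cabinet {peak:.1f} kW), design for {total * DESIGN_MARGIN:.1f} kW")
```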

Building influences

The building plays an unavoidable role in how close you can come to an ideal data center design. Even greenfield buildings have practical limits, but when you must use existing structures, building conditions often wreak havoc with designs and costs.

Existing columns interrupt cabinet rows, resulting in inefficient space layouts. Irregular walls shape the layout and reduce floor space efficiency. Floor slabs may require structural reinforcement, or wide spacing of cabinet rows to spread the load. Slab-to-slab height may not allow for a raised access floor to convey air. Room height determines whether the design can use a return air plenum or whether there is enough space to install coordinated overhead infrastructure. If there's no raised floor, the power, cable tray, piping for cooling and lighting all go overhead -- potentially creating conflicts.

Windows are a major problem in data centers, and should be removed or covered within building specifications. Freight elevator access is mandatory, as is a clear path to move expensive equipment without risking steep or sharp angles. And, of course, unless the building has sufficient power and access to common carriers for communications, costs will either soar or designs will be forever limited.

Data center properties must always have space available for cooling towers, heat exchangers and generators. These big units also create noise, and designers must take steps to ensure that it does not disturb people in the building or neighbors in close proximity.

There are no stock solutions for data centers. Even containerized modules are customized to some degree. But for purpose-built data centers, the large investment should come with an equally large allocation of time and thought. Follow this data center design guide before budgets are established and certainly before a shovel touches soil.

Next Steps

Pull the plug to test data center systems

An IT service continuity plan that's within reach

The sky isn't falling -- just data center humidity recommendations

This was last published in December 2015

Join the conversation

How long have you used your current data center facility?

Our data center was commissioned in 2004 and is now 11 years old. It has two incoming power mains with two sets of UPS (with gen-sets). Total floor space is 1,400 sq feet and is almost 70% occupied. There is space available for 8 more standard IBM racks. There are 4 CRAC units (3 running, 1 stand-by). Most of the systems have been upgraded to VMs. Power utilization is in the range of 70%-75% of total capacity. Further decommissioning of old servers will help improve this.

Hi

I would like to understand how the ASHRAE standards compare with (ETSI) EN 300 019 Class 3.1 for normal and exceptional conditions.

Traditionally, network hardware is tested and warrantied for the extreme environments set out in (ETSI) EN 300 019 Class 3.1, as illustrated in http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf.

The extended thermal envelopes for ETSI/NEBS and ASHRAE may be very similar, but what appears to remain very different is the level of filtration of external contaminants required for traditional telecom environments and hardware.

I am currently working in a telecom environment in Central London (a new build) where the cooling methodology is based on direct air-side economisers utilising fresh air as the primary cooling source. The level of filtration on the AHU units is very low, with the highest grade filter being G4, or 40% atmospheric dust spot efficiency. So, as you can imagine, in Central London there is a very high level of contaminant within the rooms, especially from carbon sources.

As you say, these new environments now host not only telco equipment (core routers/switches) but also server hardware. Whilst the AHUs can and will keep the environment within the thermal envelope for servers and network hardware, what they cannot do is maintain the contamination levels required for server hardware.

It's apparent that these telecom rooms have traditionally been designed and built to provide a temperature-controlled environment only, with contamination levels not a major concern, as the network equipment manufacturers warranted the hardware for somewhat extreme levels of contaminant. This methodology appears to have continued even though the rooms now hold not only telco hardware but also server technology.

How do the two standards compare for contamination levels, and how should this situation be addressed in the rooms I have referenced? I have raised it as an operational and business risk, as I do not believe the environment is fit for purpose for servers.

I would appreciate your advice and input.

Regards,

Willaim101.
