Several fundamental factors drive data center design requirements and costs. Get them right before establishing budgets and drafting blueprints. Failure to do so virtually guarantees problems after occupancy.
Fixing mistakes in a data center after it's running is challenging, expensive and operationally dangerous. If the build team uses this data center design guide to properly determine requirements in the beginning, and the design conforms to modern industry standards and practices, major upgrades should not be necessary for many years.
The three most significant factors in a data center design are level of reliability, potential for growth -- positive or negative -- and rate of churn or hardware refresh.
Determining true IT reliability needs
Everyone thinks their systems and applications are mission critical, but the real measure is how the company would fare without them during an outage of any duration. A lost system could expose the organization to security breaches, risk to human life or other serious hazards, or its effect could be measured in lost money and reputation.
The effect of an outage should be quantifiable based on duration: fifteen minutes, a half hour, an hour, two hours, four hours, eight hours or longer. That tells designers how much redundancy to include in the data center, and allows the organization to compare reliability costs with potential exposure.
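The comparison of reliability costs with potential exposure can be reduced to simple arithmetic. The sketch below illustrates the idea; all dollar figures, outage frequencies and the `worst_case_exposure` helper are hypothetical, not figures from the article.

```python
# Sketch: quantify outage exposure by duration (all figures hypothetical)
# and compare it against the annualized cost of added redundancy.

# Hypothetical loss estimates, in dollars, keyed by outage duration in minutes.
loss_by_duration = {
    15: 5_000,
    30: 12_000,
    60: 30_000,
    120: 75_000,
    240: 180_000,
    480: 400_000,
}

def worst_case_exposure(expected_outages_per_year: dict) -> float:
    """Sum expected annual loss given outage frequency per duration."""
    return sum(loss_by_duration[minutes] * freq
               for minutes, freq in expected_outages_per_year.items())

# Example: one 60-minute and two 15-minute outages expected per year.
annual_exposure = worst_case_exposure({60: 1, 15: 2})
annual_redundancy_cost = 50_000  # hypothetical cost of extra redundancy

print(annual_exposure)  # 40000
# If exposure exceeds the redundancy cost, the redundancy pays for itself.
print(annual_exposure > annual_redundancy_cost)  # False
```

With exposure quantified per duration, the organization can see exactly which redundancy investments are justified and which are not.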
Too often, the operator specifies the Uptime Institute's rigorous Tier IV uptime level without a full understanding of what that really means in terms of design complexity, capital cost or operational support. In data centers of significant scale, even if Tier IV is justified, it's probably not necessary for the entire space. Consider zoning the facility, with less critical functions in a Tier III or even a Tier II area.
A realistic assessment of criticality, system by system, should be the first step of any data center design guide -- before any designing takes place. With that information, and an understanding of what is actually driving classifications, the data center designer determines the most appropriate and cost-effective approach.
Even if the facility is designed with uniform redundancies and reliability goals throughout, the process of making determinations around uptime will help prioritize which systems get primary attention for restoration in a major outage.
The trouble with growth predictions
While the cloud provides relief for data centers running out of space for new cabinets, many organizations still keep critical computing under direct control. The data center planning guide should include considerations for on-site moves as well as incremental growth. More than a few companies pulled operations back into their own data centers after experiencing cost and/or performance issues with a service provider.
Making predictions more difficult, power and heat loads often grow independently of space, even as IT equipment cabinet counts go down. Smaller IT hardware generally means reduced vertical dimension, but the hardware usually gets deeper to compensate. Standard-height cabinets must now be 42" to 48" (1060 to 1200 mm) deep, instead of the legacy 36" (900 mm) depth, and data centers require wider aisles for maneuvering racks and equipment. Cabinets wider than the legacy 24" (600 mm) accommodate increased cable density, as well as dual power strips and the masses of power cords that go with them, without blocking exhaust airflow; the recommended norm today is a nominal 30" (760 mm) width. The combined increase in cabinet depth and width requires more floor space even with no actual growth in cabinet count.
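The floor-space arithmetic implied by those dimensions is easy to check. A minimal sketch, using only the cabinet footprint (aisle widening would add more):

```python
# Sketch: cabinet footprint alone grows substantially when moving from
# legacy dimensions (24" x 36") to the newer norm (30" x 48").

def footprint_sqft(width_in: float, depth_in: float) -> float:
    """Cabinet footprint in square feet, from dimensions in inches."""
    return (width_in * depth_in) / 144.0  # 144 sq in per sq ft

legacy = footprint_sqft(24, 36)   # legacy cabinet
modern = footprint_sqft(30, 48)   # nominal modern cabinet

print(legacy)  # 6.0
print(modern)  # 10.0
print(f"{(modern / legacy - 1) * 100:.0f}% more floor per cabinet")  # 67%
```

Even with zero growth in cabinet count, the same row of cabinets needs roughly two-thirds more floor area.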
The more IT hardware you pack into a cabinet, and the higher performance you get from each device, the more power will be required and the greater the heat density to be cooled. Virtualization and consolidation are the major drivers behind this change to data center design guides. Dense operations require more space for uninterruptible power supplies, power distribution units and air conditioning equipment, much of which is now installed within equipment rack rows. Even if the newer approaches don't increase total floor space requirements, they will change physical layouts.
Growth is particularly difficult to predict in corporations that carry out mergers and acquisitions, as well as in research organizations where grants suddenly inject major computing systems into the facility.
There's no truly accurate prediction of growth for more than a few years ahead, but a realistic assessment of the probabilities will enable modular designs that support elastic scaling over many years. That kind of flexibility is the true measure of a successful modern data center design.
Churn in a data center design guide
Some organizations maintain owned data centers due to a high rate of churn. Financial institutions have short hardware refresh cycles to maintain peak competitive performance. Academic institutions see large research systems show up with little notice. Any enterprise can have segments that change quickly for various reasons. A high rate of churn requires that the data center quickly and easily adjust capacity, usually a hands-on task. The large and frequent fluctuations in space, power and cooling demands drive up hosting facility bills.
Rate of churn is easily quantifiable based on operational history. This information significantly influences the degree of flexibility built into the data center design. Get refresh information right to support changing computing requirements, maintain energy efficiency and minimize energy costs.
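Since churn is quantifiable from operational history, a first-order estimate can be as simple as the following sketch; the cabinet counts and refresh records are hypothetical, standing in for an organization's real operational data.

```python
# Sketch: estimate annual churn rate from operational history
# (all figures hypothetical).

from statistics import mean

# Hypothetical history: cabinets reconfigured or refreshed in each of
# the last four years, out of a 200-cabinet floor.
total_cabinets = 200
cabinets_changed_per_year = [38, 45, 52, 41]

annual_churn_rate = mean(cabinets_changed_per_year) / total_cabinets
print(f"{annual_churn_rate:.0%}")  # 22%
```

A floor turning over roughly a fifth of its cabinets each year needs far more built-in flexibility in space, power and cooling than one refreshing on a five-year cycle.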
Power and heat loads
Once the fundamental requirements are understood for the design guide, establish the actual parameters, starting with power and heat loads.
Avoid the outdated watts per square foot measure -- today's data centers are anything but uniform across the entire space. Designing for averages creates inadequate capacities in some places and overprovisioning in others, as well as unnecessary costs if the entire facility is equipped for maximum projected load.
Develop load estimates by cabinet. Existing cabinet loads are easy to obtain from smart power strips or via an electrician's clamp-on meter. Circuit load measurements from the clamp-on meter are instantaneous and not averaged over time, but still provide a good indication of relative cabinet draws from which a designer can make sizing judgments.
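Turning those per-cabinet readings into relative draws is straightforward. A minimal sketch, assuming hypothetical cabinet names, amp readings and a nominal 208 V circuit:

```python
# Sketch: convert instantaneous clamp-on readings into relative cabinet
# loads for sizing judgments (cabinet names and readings hypothetical).

# Instantaneous circuit readings in amps, per cabinet, at nominal 208 V.
readings_amps = {"A01": 12.4, "A02": 21.7, "A03": 8.9, "B01": 18.2}
VOLTS = 208

# Approximate load in kilowatts per cabinet.
loads_kw = {cab: amps * VOLTS / 1000 for cab, amps in readings_amps.items()}
peak = max(loads_kw.values())

# Rank cabinets relative to the heaviest draw.
for cab, kw in sorted(loads_kw.items(), key=lambda kv: -kv[1]):
    print(f"{cab}: {kw:.2f} kW ({kw / peak:.0%} of peak)")
```

The absolute numbers are snapshots, not averages, but the relative ranking is what lets a designer size power and cooling zone by zone instead of by a facility-wide average.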
The building plays an unavoidable role in how close you can come to an ideal data center design. Even greenfield buildings have practical limits, but when you must use existing structures, building conditions often wreak havoc with designs and costs. Existing columns interrupt cabinet rows, resulting in inefficient space layouts. Irregular walls shape the layout and reduce floor space efficiency. Floor slabs may require structural reinforcement, or wide spacing of cabinet rows to spread the load.

Slab-to-slab height may not allow for a raised access floor to convey air. Room height determines whether the design can use a return air plenum, or whether there is enough space to install coordinated overhead infrastructure. Without a raised floor, the power, cable tray, cooling piping and lighting all go overhead, potentially creating conflicts.

Windows are a major problem in data centers and should be removed or covered in the building specifications. Freight elevator access is mandatory, as is a clear path to move heavy, expensive equipment without steep ramps or sharp turns. And, of course, unless the building has sufficient power and access to common carriers for communications, costs will soar or the design will be forever limited.
Data center properties must always have space available for cooling towers, heat exchangers and generators. These big units also create noise, and designers must take steps to ensure that it does not disturb people in the building or neighbors in close proximity.
There are no stock solutions for data centers. Even containerized modules are customized to some degree. But for purpose-built data centers, the large investment should come with an equally large allocation of time and thought. Follow this data center design guide before budgets are established and certainly before a shovel touches soil.