It's an impossible task, but it has to be done: predicting future data center space, power and cooling requirements.
It has never been easy, but the advent of cloud computing has made infrastructure capacity planning even more difficult.
Data centers are expensive to build, and renovating, upgrading or expanding is disruptive and potentially dangerous.
Space is always hard to come by, so you don't want to ask for more than you can justify. That goes for power and cooling, too: Associated electrical and mechanical equipment requires space -- sometimes just as much or more than the compute area -- and is the most costly part of expansion. Claiming you need more kilowatts than you'll ever use could price a data center project out of consideration, or lead to regrettable cost-saving measures. Inaccurate capacity estimates could be career ending at worst and a burden on the IT infrastructure's performance at best.
No one can do it 100% accurately, but there are guidelines for analyzing infrastructure capacity problems and developing realistic, defendable estimates of future need. If you take a reasoned approach to capacity growth, you can justify these demands.
IT floor space
IT people tend to think in terms of cabinet counts, but it's floor space that you should know. How much space does a cabinet really take up? There are several ways to answer that question, so pick one method that makes sense and use it consistently.
The actual cabinet occupies a specific amount of floor area of width times depth, but even that is changing. Cabinets are getting larger to accommodate higher equipment densities, deeper server form factors and multiple cable connections. Dimensions 30" wide by 48" deep (760 by 1,200 mm) are no longer unusual.
The actual cabinet footprint, as defined by ASHRAE TC 9.9, also includes the aisle space around the cabinet (see Figure 1). Structural floor loading and heat density involve the cabinet and the area around it. Because aisles are shared with adjacent rows, half of each aisle counts toward a cabinet's footprint. If a design calls for 4' (1.2 m) aisles flanking a 30" x 48" cabinet, the cabinet footprint measures 2.5' x 8', or 20 square feet (1.86 square meters). You could compute cabinet and aisle areas separately, but this approach makes it easier.
The total working IT area is the number of cabinets multiplied by the cabinet footprint. To include modern power distribution equipment, add another large cabinet per row.
Then account for the supporting cooling equipment and the end-of-row aisles required to move equipment, provide service and meet fire safety codes. The main aisles plus cooling can easily add 75% to 100% to the floor space computed from the cabinet footprint. Really efficient designs may need less space. Conversely, poorly shaped or column-filled rooms can take up more square footage. With these general numbers, however, you won't grossly under- or overestimate.
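The footprint arithmetic above can be sketched in a few lines. This is an illustrative estimator, not a design tool: the default dimensions, the one-PDU-cabinet-per-row allowance and the 75% aisle-and-cooling overhead are all assumptions drawn from the ranges in the text.

```python
# Rough IT floor-space estimate using the cabinet-footprint method.
# All default figures are illustrative assumptions, not fixed design values.

def it_floor_space(num_cabinets, cab_width_ft=2.5, cab_depth_ft=4.0,
                   aisle_ft=4.0, cabinets_per_row=10, overhead=0.75):
    """Estimate total working IT floor area in square feet.

    Each cabinet footprint includes half of the aisle on each side,
    since aisles are shared between rows. One extra large cabinet per
    row is added for power distribution; `overhead` covers main aisles
    and cooling (0.75 to 1.0 is the range suggested in the article).
    """
    # Half of each flanking aisle belongs to this row: depth + aisle width.
    footprint = cab_width_ft * (cab_depth_ft + aisle_ft)  # 2.5 x 8 = 20 sq ft
    rows = -(-num_cabinets // cabinets_per_row)           # ceiling division
    total_cabinets = num_cabinets + rows                  # + 1 PDU cabinet/row
    working_area = total_cabinets * footprint
    return working_area * (1 + overhead)

# 40 cabinets in rows of 10: 44 footprints x 20 sq ft x 1.75
print(it_floor_space(40))  # 1540.0
```

Adjust the defaults to your own cabinet dimensions and row layout; the point is to keep the same footprint convention throughout the estimate.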
For higher accuracy, choose a power and cooling approach for the new space in advance. For estimating purposes, however, the differences in options are not substantial: In-row coolers are like additional cabinets, but should eliminate most of the perimeter computer room air conditioner (CRAC) space; rear door heat exchangers add about 6" (150 mm) to cabinet depths, increasing the cabinet footprint, and yet eliminate most standard CRACs.
Try to predict how many cabinets of IT capacity will go to a hosting or colocation site, or be replaced by cloud instances. Start with a realistic assessment of your company's propensity to look at outside services. Involve management -- including financial management -- in developing estimates, both to share business direction and to ensure support for the end product.
Ask these guideline questions for a realistic estimate:
- What is your churn rate, and in which systems? Equipment that changes frequently is less likely to go off-site. Hosting sites charge a lot of money to change infrastructure requirements, so stable operations are much more cost-effective to outsource. Depending on what is changing, cloud services may be a more adaptable choice than in-house hardware. List your systems, and note the likelihood of each to leave the data center.
- What is your operation's risk tolerance? Operations that are highly secure and risk-averse are less likely to consider cloud options. Hosting sites may have less reliable backup power, cooling and connectivity than your existing data center. Separate your systems list into risk levels for analysis.
- In operations with large storage requirements, either archival or mirrored, is backup storage a good candidate for off-site location?
You have the power
This article uses actual power draws to describe cabinet densities. Actual power draw is much less than the total of nameplate ratings. Nameplate ratings can mislead data center designers into provisioning 40% to 60% higher power availability than is consumed in reality. Follow these steps to properly size UPS systems.
Power, cooling and density
IT infrastructure space doesn't include room for facility infrastructure: uninterruptible power supplies (UPS), chillers, pumps, master power centers, generators and other central equipment. As a rule of thumb, estimate at least another 50% of your total data center area for an Uptime Institute Tier II facility, 75% for Tier III operations, and at least 100% more for Tier IV.
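Those rule-of-thumb multipliers are easy to tabulate. A minimal sketch, assuming the minimum support-space percentages stated above (real Tier requirements involve far more than a floor-area ratio):

```python
# Illustrative support-space rule of thumb per Uptime Institute Tier.
# Multipliers are the minimums cited in the text; treat them as assumptions.

TIER_SUPPORT = {"II": 0.50, "III": 0.75, "IV": 1.00}

def total_data_center_area(it_area_sqft, tier):
    """IT floor area plus minimum space for UPS, chillers, pumps,
    power centers, generators and other central plant equipment."""
    return it_area_sqft * (1 + TIER_SUPPORT[tier])

# 1,540 sq ft of IT space in a Tier III facility:
print(total_data_center_area(1540, "III"))  # 2695.0
```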
Compaction -- data center equipment packing more power into smaller form factors -- increases the amount of power and cooling needed per cabinet. It makes little difference in actual space requirements because, although equipment continues to get smaller and more powerful, we acquire more of it.
Density isn't running rampant. Despite predictions of 25 kW to 40 kW cabinets, only a small percentage of data center racks exceed 8 kW to 10 kW, and most are still in the 5 kW to 8 kW range. So unless you're a research entity running high-performance computing, a full room of 35 kW cabinets is unrealistic.
Plan power realistically. If your cabinets are equipped with metered power strips, and particularly if you're using data center infrastructure management software to record the power draws from each cabinet over time, you can accurately determine actual loads.
If you're unsure of current power draw, there are three ways to estimate it:
- Read your UPS monitor panel. Divide the total load by the number of cabinets for average watts per cabinet. If you run a 2N UPS configuration, each UPS carries only half the actual load, so read both systems, add them together, then divide by cabinet count. Also account for abnormal cabinets, such as high-utilization blade servers that could run at 12 kW per cabinet.
- Look at the circuit breaker ratings in the branch circuit panels. Circuit breakers should be loaded to only 80% of rating on a continuous basis. Use chart 1 to determine the maximum capacities of the cabinets' circuits. If your cabinets are dual-circuited, with power coming from two different panels and circuit breakers, maximum load is based on only one of them.
- Have an electrician measure the actual load on each branch circuit with a clamp-on meter. These are instantaneous measurements that don't account for fluctuations over the course of a day, but they help estimate real cabinet loads. For dual-circuited cabinets, add the measured loads from both circuits for each cabinet.
Group the highest-density cabinets together for space predictions. Rather than designing the whole data center for this level, partition it into high- and normal-density areas to reduce cost and floor space. Add 25% to the floor area for true high-density cabinets (15 kW or more) to account for the additional power and cooling requirements. Add another 25% if you use fully redundant 2N cooling systems.
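The first two estimating methods above reduce to simple arithmetic. This sketch assumes the 2N convention described in the text (each UPS carries half the load, so both readings are summed) and the 80% continuous-load rule for breakers; function names and the sample figures are illustrative.

```python
# Sketches of the power-estimating methods above; figures are assumptions.

def avg_kw_per_cabinet(ups_loads_kw, num_cabinets):
    """Average cabinet draw from UPS monitor-panel readings.

    In a 2N configuration each UPS carries only half the actual load,
    so pass the readings from both systems and they are summed before
    dividing by the cabinet count.
    """
    return sum(ups_loads_kw) / num_cabinets

def breaker_capacity_kw(volts, amps):
    """Maximum continuous load on a branch circuit: 80% of the
    breaker rating, per standard practice."""
    return volts * amps * 0.8 / 1000.0

# Two UPS systems in 2N, each reading 120 kW, feeding 40 cabinets:
print(avg_kw_per_cabinet([120, 120], 40))  # 6.0 kW per cabinet
# A 208 V / 30 A branch circuit:
print(breaker_capacity_kw(208, 30))        # 4.992 kW
```

Remember the caveats from the list: exclude or handle separately any abnormal cabinets (such as 12 kW blade-server cabinets), and for dual-circuited cabinets base maximum load on only one of the two breakers.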
Design for change
Good design enables you to add capacity units to the infrastructure without operational impact. This reduces initial capital budget, allows capacity increases in-line with business moves, and results in higher energy efficiency.
Insist on cost estimates for a modular, staged design. There is no reason to install maximum predicted UPS and cooling capacity to support operations on day one. It will inflate your budget.
Some things must be completely installed before operations in the new space begin. The piping and main wiring have to be there, or else you'll be doing heavy work inside an operating data center when utilization grows. Post-expansion work in the electrical/mechanical support area might demand an IT shutdown, which quickly negates any success in your infrastructure capacity plan.
About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.