Whether you're sizing an uninterruptible power supply (UPS) or an air conditioner, you need to know what your data center
power load really is. Data center engineers often overstate the power load because they don't know how to estimate it or are afraid of running short. But when the numbers are too high, the result is wasted energy, wasted money and -- believe it or not -- less effective cooling. And more often than not, IT departments give engineers outrageously high load requirements. Let's see why.
The heart of the problem is understanding what "real load" actually is. That's where the process has commonly gotten off track, and it is still a mystery to most people.
I'm sorry to bring back the painful memories of high school physics, but you may recall the basic electrical power formula: watts = volts x amps. This formula is correct for direct current (DC) circuits, but not quite accurate for alternating current (AC), which powers nearly all of our data centers. In another article we'll discuss why and when it matters, but the error is small for most of today's computing hardware, so we'll ignore it for now and go on to bigger things. (If you have a large data center, the error can add up, but you should have people on staff who know all about it.)
So if we can still assume that watts = volts x amps, where's the problem? It goes back to the old programmer's acronym, GIGO -- if you put garbage in, you will definitely get garbage out. What's important is where the volts (V) and amps (A) numbers for the formula came from. If those numbers are wrong, as they usually are, the result will be wrong as well.
Every piece of electrical equipment is required to have a nameplate that lists the operating voltage and current (amperes). Sometimes it also has the wattage, but usually not. It may not be in plain sight, but it's always there, attached, silk-screened or stamped into the metal. It's common for people to take these numbers from each piece of equipment (or the data sheets, which often have the same information), multiply V by A, add them up and tell the design engineer that this is the expected load. Unfortunately, those numbers can easily be 40% to 60% high. Electrical engineers have tended to assume this and factor the numbers down by that amount. Not a very accurate way of doing things, is it?
So why are these numbers so far off? The answer lies in how nameplate ratings are developed. Nameplate amperage is the highest current a device can draw, when fully configured with every possible option, operating at 100% utilization, and at the lowest voltage at which it will still operate. We know that almost nothing runs flat out at 100%, 24/7, with every possible board and drive installed. But that's only part of the error. It's that last piece of the definition that makes the biggest difference.
You might recall that current (amps) is inversely proportional to voltage. (There's that darned physics again -- you were told that some day it would be useful.) Since the watts consumed by a piece of equipment stays pretty much constant regardless of voltage, then the amps have to change -- so when the voltage goes down, the amperage goes up. Most equipment will operate all the way down to 90 volts or even lower, and it is that voltage on which the stated amperage is based.
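This constant-power relationship can be sketched in a few lines of Python. The 216 W figure is a made-up example load, not from the article; it is chosen only to show how the same device draws more amps as the voltage drops toward the 90 V nameplate basis:

```python
# Sketch (hypothetical numbers): at roughly constant power draw,
# current rises as voltage falls, so amps rated at the 90 V low end
# overstate the current actually drawn at 120 V.
def amps(watts: float, volts: float) -> float:
    """Current drawn by a constant-power load at a given voltage."""
    return watts / volts

POWER_W = 216.0  # assumed constant real draw of a hypothetical device

print(amps(POWER_W, 120))  # 1.8 A at the voltage your UPS actually delivers
print(amps(POWER_W, 90))   # 2.4 A at the low-voltage nameplate basis
```

The same 216 W device carries a 2.4 A nameplate even though it never draws more than 1.8 A on a healthy 120 V circuit.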
When you take amps off the nameplates and do the simple volts x amps multiplication, what voltage do you use? If you're in the United States, you use 120 or 208 V, because that's what our UPS systems deliver -- steadily and consistently. But when you multiply nameplate amps by 120 V, you're probably at least 33% high, and that's not all. Manufacturers can round off nameplate amps to the next higher number, so 2.4 amps can quickly become 3 amps, creating another upsizing of unknown magnitude. And we still haven't considered the hardware configuration or usage level. It should be pretty clear how quickly the numbers can grow to outrageous levels, so this is obviously not the best way to determine your data center power draw.
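The compounding of these two errors is easy to see with arithmetic. This sketch uses the same hypothetical 216 W device: its nameplate amperage is set at 90 V, then rounded up, then multiplied back by 120 V the way the naive estimate does:

```python
# Sketch of how nameplate-based estimates inflate, using assumed numbers.
import math

true_watts = 216.0                        # actual draw of a hypothetical device
nameplate_amps = true_watts / 90          # 2.4 A, rated at the 90 V low end
rounded_amps = math.ceil(nameplate_amps)  # manufacturers may round up: 3 A

naive_estimate = nameplate_amps * 120     # 288 W -- already 33% high
rounded_estimate = rounded_amps * 120     # 360 W -- now 67% high

print(f"{naive_estimate / true_watts - 1:.0%}")    # 33%
print(f"{rounded_estimate / true_watts - 1:.0%}")  # 67%
```

And that 67% overestimate still assumes a fully loaded machine running at 100% utilization, so the real gap is usually wider.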
Some manufacturers are providing better data, if you can find it, and one way is to use the online "configurators" available through most of the major manufacturers. If you have the time and patience to use them, they give you accurate power draws for specific hardware configurations, but they are generally pretty cumbersome and are actually more exact than we need for estimating data center power requirements. Plus, they are generally based on 100% utilization, which is a rare occurrence for any machine. It's even more uncommon for every piece of hardware in the data center to be running at 100% simultaneously. So let's look at what might be a more practical way.
What about hardware power supply ratings? A 500 W supply is not going to deliver more than 500 W, so right away you at least have a "limit" figure. Power supplies rarely run at more than 80% of capacity because they are the most failure-prone components in most electronics equipment, so they're usually overrated just to minimize failures. Most power supplies have also been historically inefficient (about 75%), which means that to get 500 W out, you would need to put 667 W in. Newer supplies are closer to 90% efficient, which means only 555 W in for 500 W out. For simplicity, let's assume that the efficiency factor will be pretty much offset by the utilization percentage. So using the power supply ratings for each piece of hardware will probably give you a reasonably good maximum draw for your data center. Actual draw will still be less.
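The efficiency arithmetic above can be checked with a one-line function. The 75% and 90% figures come from the article; the rounding is mine:

```python
# Sketch: input power required for a given output at a given supply efficiency.
def input_watts(output_watts: float, efficiency: float) -> float:
    """Power drawn from the wall to deliver output_watts to the hardware."""
    return output_watts / efficiency

print(round(input_watts(500, 0.75)))  # 667 W in for an older 75%-efficient supply
print(round(input_watts(500, 0.90)))  # ~556 W in for a newer ~90%-efficient supply
```

The difference between input and output power is dissipated as heat, which is one reason efficiency matters to the cooling estimate as well as the electrical one.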
A word of caution: Most modern hardware is dual corded, which means it has two power supplies that are supposed to be plugged into different power sources. In normal operation, these power supplies load share, which means that each supplies only half the load. Do not add up the capacities of the two supplies. A server with two 500 W power supplies is still a 500 W unit because either supply must be capable of instantaneously supporting the full load if the other fails. (For devices with more than two supplies, such as large network switches and blade server systems, you will need to determine how many of those supplies are required as a minimum to keep the device in operation, and add up those. In these cases, it may be best to use a configurator.)
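One way to avoid double counting is to tally capacity per device rather than per supply. This is a sketch, not a standard formula; the function and its numbers are illustrative:

```python
# Sketch: count the rated capacity per device, not per power supply.
# n_required is the minimum number of supplies needed to keep the device
# running; extra supplies are redundant load-sharers and add no capacity.
def device_rating(supply_watts: float, n_supplies: int, n_required: int) -> float:
    """Maximum draw to budget for a device with redundant supplies."""
    return supply_watts * min(n_supplies, n_required)

print(device_rating(500, 2, 1))   # dual-corded server: still 500 W, not 1000 W
print(device_rating(2000, 4, 3))  # hypothetical chassis needing 3 of 4 supplies
```

For the dual-corded server, only one supply's rating counts; for the hypothetical four-supply chassis, three of the four are required, so three ratings are summed.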
Of course, if you have existing hardware, you can get actual load measurements from UPS readings, cabinet plug strips with IP-addressable metering or recording metering equipment, which gives the best data of all. But measurements must be made over time. Instantaneous readings may be at the low point of the day, so they can be misleading.
Data center power load capacity planning
When you have a reasonably accurate idea of your real load, you will need to predict the future. This is the most difficult part of all, and no one can give you a magic formula for doing it. You have to take a hard, objective look at your business, its IT growth history and the data you have on typical equipment loads, and do the best you can. Don't worry about being overly exact. You've already gotten rid of the biggest error factors, and you're going to ultimately round up the numbers anyway because UPSes and air conditioners come in "step size" increments. There are three main factors to consider.
- Predictable growth. (For example, when you're planning to move into a new data center or when you're going to vacate your existing one.)
- Anticipated growth for another two or three years. (You don't want upgrades just after moving in, and you may not actually vacate as soon as you expect.)
- Projected growth over the expected facility life. (This must take into account consolidation efforts, evolution to higher-density computing, energy-efficient hardware developments and the propensity of your organization to make big changes, like acquisitions or super computers.)
No matter how well you think you've done, there's a reality cross check you should still do for that long-term number. On a cabinet floor plan of your data center, assign groups of cabinets to each function (network patch, network switches, conventional servers, blade servers, disk storage, tape storage, mainframes, super computers, or other big box systems). It doesn't matter if the locations and organization on your plan are right, so long as the cabinet counts are realistic. Now assign each type of cabinet a maximum average load in watts or kilowatts based on the numbers developed previously, and add them up. Then ask two questions.
- How similar is this number to your long-term predictions?
- How does this number compare with your existing or planned cooling capacity? (You'll probably need an engineer or facilities person to help you with this one, particularly if you're limited by a maximum allowable capacity in your building.)
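The floor-plan cross check is just a weighted sum, which a short script makes concrete. Every cabinet count and per-cabinet load below is invented purely to show the mechanics; substitute the numbers you developed from your own measurements:

```python
# Sketch of the floor-plan cross check, with made-up cabinet counts and loads.
# Each entry: (cabinet count, assumed maximum average kW per cabinet).
cabinets = {
    "network patch":  (4, 0.5),
    "network switch": (2, 3.0),
    "servers":        (20, 4.0),
    "blade servers":  (4, 8.0),
    "disk storage":   (6, 5.0),
}

total_kw = sum(count * kw for count, kw in cabinets.values())
print(f"{total_kw:.0f} kW")  # compare against long-term projection and cooling capacity
```

If this total lands far from your long-term growth projection, one of the two sets of assumptions needs another look.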
If your projections and cross-check numbers are reasonably close (within 10% to 15%), you're in good shape. If not, something needs to be adjusted. Remember, the goal here is to make a realistic prediction, so if your average cabinet load number is way above your long-term estimates, it's probably higher than necessary. But if cooling capacity is your limit, you're either going to have to justify more (which we'll discuss in an upcoming article) or that's going to be your estimate number. There's no use powering more than you can cool.
So now you have your most important numbers: near-term future and long-term growth. By doing these realistically, you can right-size your UPS and cooling, and you can justify what you're asking for. Stay tuned for how to deal with both of those critical items.
About the author:
Robert McFarlane has spent more than 30 years in communications consulting, with experience in every segment of the industry. McFarlane was a pioneer in the field of building cabling design and a leading expert on trading floor and data center design. He is currently president of the Interport Financial Division of New York-based Shen Milsom & Wilke Inc. and a data center power expert.