Power trends are numerous and sometimes confusing for data center designers and operators. Make sure you understand the value of a power control or monitoring product before plugging into the fad.
Despite their significant cost, intelligent power strips are widely implemented and coveted in new and existing data centers. Intelligent power strips are more robust than the simple plug strips we used for years and offer multiple receptacle configurations, voltage standards and power capacities. But their greatest single advantage is power measurement.
Without these sophisticated power strips, it's difficult to know how much power each cabinet draws, an important factor in energy management and phase balance. Strips can remotely read and switch power on each receptacle, which enables remote door access control or temperature and humidity measurements in each cabinet.
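As a rough illustration of the measurement benefit, per-receptacle readings from intelligent strips can be rolled up into per-cabinet totals for energy management. This is a hypothetical sketch, not any vendor's API; the data shapes and numbers are invented for the example.

```python
def cabinet_draw(readings):
    """Aggregate per-receptacle power readings into per-cabinet totals.

    readings: list of (cabinet_id, watts) tuples, as might be collected
    from intelligent power strips (illustrative data format).
    """
    totals = {}
    for cabinet, watts in readings:
        totals[cabinet] = totals.get(cabinet, 0) + watts
    return totals

# Invented sample readings: two receptacles in cabinet A1, one in B2.
samples = [("A1", 300), ("A1", 450), ("B2", 600)]
print(cabinet_draw(samples))  # per-cabinet draw in watts
```

With totals like these in hand, an operator can spot cabinets approaching their circuit limits and feed the same numbers into phase-balance decisions.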
Intelligent power strips also boast many features of questionable or marginal value in most data centers, so they should be considered for specific cabinets, not deployed universally. Selecting the appropriate unit from the thousands now on the market can be daunting, and some features seem to exist only to promote the manufacturer, not to address real needs. Ideally, the industry's focus will shift to logical outlet arrangements, ease of obtaining data and high-capacity distribution.
In U.S. data center power design, most equipment has run at 120 V for decades. We now know that computing equipment runs more efficiently at higher voltage. Consequently, more cabinets today draw 208 V, which has become the norm for branch circuit distribution in new and renovated data centers. Operating at 208 V improves energy efficiency and requires fewer, smaller wires to deliver the higher levels of power required.
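The wiring advantage follows from basic arithmetic: for the same power, higher voltage means proportionally less current, and less current means smaller conductors. A minimal sketch, assuming a resistive load at unity power factor (real IT loads differ somewhat) and an invented 5 kW cabinet:

```python
def required_current(power_w, voltage_v):
    """Current (amps) needed to deliver a given power at a given voltage,
    assuming unity power factor (a simplification for illustration)."""
    return power_w / voltage_v

# Hypothetical 5 kW cabinet at the three voltages discussed in the article.
for volts in (120, 208, 240):
    amps = required_current(5000, volts)
    print(f"{volts} V -> {amps:.1f} A")
```

The same cabinet that needs roughly 42 A at 120 V needs only about 24 A at 208 V, which is why the higher-voltage branch circuits get by with fewer, smaller wires.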
Europe has always operated between 220 V and 240 V, deriving power from nominal 400 V service (380 V to 415 V depending on the country). And since virtually all computing hardware can run on anything from 120 V to 240 V, and usually auto-senses the incoming voltage, Europe has been realizing higher efficiency for years.
Using 240 V slightly increases efficiency over 208 V to cabinets, but its real payoff is in phase balance. Because 208 V is derived from two of the three phases of the U.S. power system, phase balancing can be tricky: every time a load moves, one of the two phase loads changes while the other remains the same. Europe's 240 V power is configured just like the 120 V system: each circuit comes from just one phase of the 415 V, three-phase power system, so moving loads to balance the phases is simple, logical and straightforward. We gain additional efficiency from phase balancing and derive maximum capacity from our uninterruptible power systems (UPSes).
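The balancing difference can be sketched numerically. In this simplified model (a rough illustration only: it splits each line-to-line circuit's load evenly across the two phases it touches, ignoring the vector nature of three-phase currents), a 208 V circuit always changes two phase tallies at once, while a line-to-neutral circuit changes exactly one:

```python
from collections import Counter

def phase_loads(circuits):
    """Tally approximate per-phase load in kW.

    circuits: list of (phases, kw) pairs, where phases is a tuple of the
    phase names the circuit touches. Splitting the load evenly across the
    touched phases is a simplification for illustration.
    """
    loads = Counter()
    for phases, kw in circuits:
        for phase in phases:
            loads[phase] += kw / len(phases)
    return dict(loads)

# 208 V line-to-line circuits: each one spans two phases, so moving a
# load shifts two phase tallies at once (invented example loads).
line_to_line = phase_loads([(("A", "B"), 4.0), (("B", "C"), 4.0), (("C", "A"), 2.0)])

# 240 V line-to-neutral circuits (from a 415 V wye): each touches a
# single phase, so balancing is a direct, one-variable exercise.
line_to_neutral = phase_loads([(("A",), 4.0), (("B",), 4.0), (("C",), 2.0)])
```

Even in this toy model, the line-to-neutral case is transparent (phase C is obviously light), whereas untangling the line-to-line case requires reasoning about every circuit that shares a phase pair.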
UL-listed electrical equipment is available in the U.S. for what is generally called 400 V service. While several data centers are using it, many facilities personnel still regard 400 V service as strange and unfamiliar. As a result, 400 V distribution deserves much wider adoption in data centers than it is likely to see.
Alternating versus direct current remains one of the biggest debates in the power sector of the data center business. The debate really started with a DC power demonstration at Lawrence Berkeley National Laboratory in 2006, which reported efficiency increases as high as 28%. The AC advocates -- including just about everyone who manufactures and markets an AC UPS -- immediately challenged this number. Berkeley also reported more typical gains of 5% to 7%, which AC UPS manufacturers then claimed they could match.
There was also great concern about IT personnel handling high-voltage DC power and the fact that there were no industry standards and no UL-listed connectors for the voltage. Everyone knew that DC power had to have efficiency advantages because both the UPS inverter and the AC/DC rectifier in the server were eliminated, along with their power losses. But DC distribution, as Edison and Westinghouse hashed out long ago, still has loading considerations that AC does not.
DC power adoption also languishes due to the scarcity of data processing equipment designed for DC operation. That has started to change. The industry has settled on 380 V DC, which matches the internal DC voltage of most equipment power supplies. Standard connectors are now available as well, along with many more computing devices that offer a DC power option. Some of the most energy-efficient data centers have adopted DC power, and its adoption will likely track that of 400 V AC service: those who understand DC will exploit it, but most will stick with AC power because it is well understood and requires no special skills.
More data center trends
These aren't the only fads in data centers today. Check out the rest of this series, which covers data center trends in cooling, power, design and performance.
One of the biggest buzzwords in the industry today is data center infrastructure management (DCIM); it's one of the most important, least defined and over-hyped innovations in the modern data center. DCIM is still evolving and comes in all types, sizes and flavors, but it's vital for any facility that wants operational and energy efficiency. It could become universal, if energy efficiency laws come into play.
DCIM products may monitor only rack power and temperature or accumulate data on every detail of the data center operation and keep track of inventory, device locations and software configurations as well. As with any record-keeping tool, DCIM is only as good as the people maintaining it and reviewing the information. Buying a fancy DCIM tool with no one to keep it up or putting off improvements based on the data provided isn't benefiting anyone. The best advice is to get something that can start basic and grow.
Power usage effectiveness (PUE) was definitely over-hyped when first announced by The Green Grid. In fact, reporting the lowest PUE number was becoming a battle of the goliaths, with the major players claiming numbers many thought unrealistically low. But The Green Grid has since refined PUE, establishing four measurement categories that require adopters to gather data as prescribed and report which methodology was used. With the super-hype gone, PUE is much closer to what was really intended -- namely, a method of tracking your own energy efficiency in your own data center and seeing how much improvement you can make with each step you take.
PUE is now the most accepted method to track efficiency and goes right along with DCIM; you can't track PUE without the right data. As data centers monitor energy use to improve operations, instead of for bragging rights, PUE will appear in more organizations.
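The metric itself is simple: PUE is total facility energy divided by the energy delivered to IT equipment, so 1.0 is the theoretical ideal and everything above it is infrastructure overhead. A minimal sketch with invented meter readings:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness per The Green Grid's definition:
    total facility energy / IT equipment energy. 1.0 is the ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings: 1,500,000 kWh into the facility,
# of which 1,000,000 kWh reached the IT equipment.
print(round(pue(1_500_000, 1_000_000), 2))  # 1.5
```

The value of the number comes from trending it against your own past readings, gathered the same way each time, rather than comparing against someone else's headline figure.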
Data center power design fads tend to be widely adopted whether needed or not, so battery monitoring doesn't really qualify as one, even though many UPS installations now include some type of monitoring system. A good battery monitor can save many times its sticker price by identifying weak cells early so they can be replaced without tearing out the entire battery string.
Each manufacturer will tell you that its method of testing the cells is better for a whole list of technical reasons, but all the established systems work and are far better than nothing. Wholesale battery replacement is expensive, and it must be done every few years with VRLA (sealed) batteries, now the most commonly used in data centers; delaying that cost even a few extra months easily justifies a good monitor. Even more important is knowing that your UPS will actually carry your systems through a power interruption. There is no price tag on that.
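One common monitoring approach is tracking each cell's internal resistance, since a rising resistance often precedes failure. This sketch is a generic illustration, not any vendor's method, and the 25% warning threshold is an invented example value, not an industry standard:

```python
def flag_weak_cells(resistances_mohm, baseline_mohm, threshold_pct=25.0):
    """Return indexes of cells whose internal resistance exceeds the
    string baseline by more than threshold_pct.

    resistances_mohm: measured internal resistance per cell (milliohms).
    baseline_mohm: reference resistance for a healthy cell in this string.
    threshold_pct: invented illustrative threshold; real monitors use
    manufacturer-specific criteria.
    """
    return [
        i for i, r in enumerate(resistances_mohm)
        if (r - baseline_mohm) / baseline_mohm * 100.0 > threshold_pct
    ]

# Hypothetical string of four cells against a 3.0 milliohm baseline;
# the third cell reads 33% high and gets flagged for replacement.
print(flag_weak_cells([3.0, 3.1, 4.0, 3.05], baseline_mohm=3.0))
```

Catching that one cell early is exactly the scenario the article describes: replace it alone, on your schedule, instead of replacing the whole string after a failure.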
About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.
This was first published in October 2013