Data center metrics and standards guide
Power is an expensive commodity that's getting even more costly, so it makes fiscal and environmental sense to conserve it in data centers.
Even the most efficient Energy Star-rated servers require power. While you will never eliminate data center power consumption, you can increase energy efficiency, reduce computing hardware and cooling energy needs, and get your company a green reputation.
In recent years, major cooling improvements enabled significant energy saving, and several have been widely adopted. Equally important strides have been made in electrical design, but most are not as well-known, so adoption has been slower. Data center operators achieve the biggest energy-use reductions by taking advantage of both cooling and power improvements.
As new systems and design techniques have come into play, new best practices have also come to the forefront. By following best practices, purchasing energy-efficient hardware, installing or upgrading to the newest power and cooling equipment, and properly managing all the elements in the data center, most organizations reduce their energy usage and improve their bottom line.
Measure data center PUE
The Green Grid, an organization that helps companies save energy and exercise environmental responsibility, developed two metrics that measure and track the effectiveness of energy-saving efforts: power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE). Both are ratios between the total power used by everything in the data center and the power used by the computing equipment alone. Total power includes computers, air conditioners and central mechanical equipment, uninterruptible power supply (UPS) systems and wiring losses, lighting and even the offices that directly support the data center.
Data center PUE is total power divided by computing power. The highest efficiency data centers run at PUEs of 1.2 to 1.5. The majority of data centers are still in the range of 2.5 to 3.5. DCIE is the inverse ratio, expressed as a percentage. PUE is the more accepted efficiency measurement.
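The two metrics are simple ratios, so the arithmetic is worth seeing once. The sketch below uses hypothetical meter readings (the 900 kW and 500 kW figures are illustrative, not from the article):

```python
def pue(total_facility_kw, it_kw):
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

def dcie(total_facility_kw, it_kw):
    """Data center infrastructure efficiency: the inverse of PUE, as a percentage."""
    return 100.0 * it_kw / total_facility_kw

# Hypothetical readings: 900 kW at the utility meter, 500 kW at the IT load.
print(pue(900, 500))   # 1.8 -- better than most, short of best-in-class
print(pue(1500, 500))  # 3.0 -- typical of the 2.5-to-3.5 majority
print(dcie(900, 500))  # ~55.6%
```

Note that both functions need the same two measurements, which is exactly why metering the data center separately from the rest of the building matters.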
The data required to calculate PUE, or DCIE, has been lacking in most operations. Even today, with a high level of energy consciousness, most data center managers have no idea how much power their facilities use: either the data center is on the same meter as the rest of the building, the bill goes to the accounting department, or both. This lack of information makes it difficult to know what your energy-saving efforts accomplish.
Monitoring power usage is an important step in improving efficiency. This requires more sophisticated monitoring than just total power usage, and has led to the development of a range of data center infrastructure management products. These products range from applications dedicated to monitoring individual power strips to systems that watch over every piece of electrical and mechanical equipment and computing hardware, maintain inventory records, provide change control and document cabling. Knowing when a server is no longer used, for example, makes it possible to bring energy demand to the absolute minimum.
Keep cooling cool
Once you know the data center's PUE, you can improve energy efficiency. Cooling systems are traditionally the largest power consumers in the data center, second only to the computing equipment itself.
Data centers should follow cooling basics: hot-aisle/cool-aisle cabinet configuration, blanking panels filling unused rack spaces, covers on holes in raised-floor air plenums and regular removal of abandoned cables to improve airflow.
New techniques, like containment, do more. Containing the cool inlet air and the hot exhaust from equipment improves cooling efficiency, increases air-conditioner capacity and saves energy -- all with simple barriers around aisles. But when you adopt a containment approach, be aware of the requirements of the newest National Fire Protection Association's standards; you can't block sprinkler or gas system heads.
Increasing equipment inlet temperature to between 75 degrees Fahrenheit and 80 degrees Fahrenheit (24 degrees Celsius to 27 degrees Celsius), per the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) TC 9.9 guidelines, also saves an enormous amount of power. This is even more effective in concert with containment.
Another approach to energy efficiency is the use of variable frequency drive (VFD) on cooling equipment. The drive frequency is set by appropriate temperature, pressure, velocity and humidity sensors in the data center. That input dictates the speeds of fans, compressors, pumps, chillers and cooling towers to match actual need. Energy use drops dramatically when motor speeds decline; combining VFD with electronically commutated motors can save even more. Further, with VFD, you can run redundant cooling continuously with lower total power usage.
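The dramatic savings from slower motors come from the fan affinity laws: shaft power scales roughly with the cube of speed. A minimal sketch, assuming an ideal fan with no fixed losses:

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity law: shaft power scales with the cube of speed.
    Idealized -- real motors and drives have fixed losses on top of this."""
    return speed_fraction ** 3

# One fan slowed to 80% speed draws roughly half its full-speed power:
print(fan_power_fraction(0.8))  # ~0.512

# Two redundant fans at 50% speed move the same air as one at full speed,
# yet together draw only about a quarter of the power:
print(2 * fan_power_fraction(0.5))  # 0.25
```

The second case is why the article's point about running redundant cooling continuously holds: splitting the airflow across more, slower fans reduces total power even though more machines are spinning.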
Better computing through electricity
Changes to a data center's power infrastructure reduce energy consumption.
Computer power supplies run more efficiently at higher voltages (e.g., at 208 volts rather than 120 V), which even most legacy hardware senses automatically. Running three-phase circuits to cabinets, particularly high-density ones, is usually the most efficient way to deliver 208 V power, and also provides maximum capacity for load growth. Each 208 V circuit requires two phase conductors, but running all three phases of a three-phase system yields three 208 V circuits with only one additional conductor. If you still need a few 120 V circuits, a fourth neutral conductor can be installed. In existing data centers, wire new cabinets for 208 V and upgrade remaining cabinets as necessary.
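The capacity advantage of three-phase circuits is easy to quantify: three-phase power is the square root of three times line voltage times current. The sketch below compares a three-phase 208 V circuit against a single-phase 120 V circuit of the same amperage; the 80% derate for continuous loads is an assumption based on common North American (NEC) practice, and unity power factor is assumed for simplicity:

```python
import math

def three_phase_kw(line_voltage, amps, power_factor=1.0, derate=0.8):
    """Usable three-phase capacity in kW: sqrt(3) * V * I * pf, derated
    for continuous loads (80% is typical NEC practice)."""
    return math.sqrt(3) * line_voltage * amps * power_factor * derate / 1000.0

def single_phase_kw(voltage, amps, power_factor=1.0, derate=0.8):
    """Usable single-phase capacity in kW, with the same derate."""
    return voltage * amps * power_factor * derate / 1000.0

# A 30 A, three-phase 208 V circuit vs. a 30 A, single-phase 120 V circuit:
print(round(three_phase_kw(208, 30), 2))   # ~8.65 kW
print(round(single_phase_kw(120, 30), 2))  # 2.88 kW
```

Roughly three times the usable capacity per circuit, which is why three-phase distribution suits high-density cabinets.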
Even higher efficiency, plus simplified operation, can be gained by using European 415 V/240 V power (commonly called 400 V). This standard delivers 240 V to the equipment on four wires (the same three-phase wires plus a neutral wire as for 208/120 V systems). Computer power supplies run even more efficiently on 240 V than on 208 V, with the added advantage of being much easier to phase balance with loads moving on one wire at a time.
A scalable UPS infrastructure also improves data center PUE. It provides only as much power as needed, easily augmented as load increases. Scalable UPS systems let you add capacity either by plugging in additional modules or by unlocking additional built-in capacity via software control. By matching UPS capacity to actual load, you are always running in the most efficient part of the UPS load curve. This is particularly important in 2N redundant designs.
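Sizing a modular UPS comes down to carrying the load with the fewest modules consistent with the redundancy target, so each module runs well up its efficiency curve. A minimal sketch with hypothetical 50 kW modules and N+1 redundancy (the module size and load figures are illustrative):

```python
import math

def modules_needed(load_kw, module_kw, redundancy=1):
    """Smallest module count that carries the load, plus N+x redundancy."""
    return math.ceil(load_kw / module_kw) + redundancy

def load_fraction(load_kw, module_kw, modules):
    """Fraction of installed UPS capacity actually in use."""
    return load_kw / (modules * module_kw)

# Hypothetical: 120 kW IT load, 50 kW modules, N+1 redundancy.
n = modules_needed(120, 50)                    # 3 to carry the load, +1 spare
print(n, round(load_fraction(120, 50, n), 2))  # 4 modules, running at 60% of capacity
```

Compare that 60% operating point with a fixed 2N design built from two oversized units, where each side may idle below 30% load, typically well down the efficiency curve.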
Another type of backup power system is the flywheel UPS. Instead of batteries that store electrical energy, a heavy, constantly spinning wheel stores kinetic energy. When utility power fails, the flywheel generates electricity until it slows down, which usually takes 20 to 45 seconds. Flywheels are sometimes used in conjunction with batteries to minimize battery depletion. This particularly suits locations where short power interruptions are common, whether due to remoteness or under-developed utility infrastructure.
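The 20-to-45-second window follows directly from the physics: a flywheel's stored energy is half its moment of inertia times angular speed squared, and ride-through is the usable share of that energy divided by the load. The wheel figures below are hypothetical, chosen only to land in the article's stated range; real products vary widely:

```python
import math

def ride_through_seconds(inertia_kg_m2, rpm, load_kw, usable_fraction=0.5):
    """Ride-through time from a flywheel's kinetic energy, E = 1/2 * I * w^2.
    Only part of the stored energy is usable before the wheel spins too
    slowly to generate; 50% usable is an assumption for illustration."""
    omega = rpm * 2 * math.pi / 60.0              # convert rpm to rad/s
    energy_joules = 0.5 * inertia_kg_m2 * omega ** 2
    return usable_fraction * energy_joules / (load_kw * 1000.0)

# Hypothetical wheel: 30 kg*m^2 spinning at 8,000 rpm, backing a 150 kW load.
print(round(ride_through_seconds(30, 8000, 150)))  # ~35 seconds
```

A few tens of seconds is ample to ride out momentary dips, or to bridge the gap until a standby generator comes up to speed.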
Eco-mode UPSes are also gaining traction. They advertise 98% to 99% efficiency, regardless of load level. Unlike the more common double-conversion UPS -- which converts incoming alternating current (AC) utility power to direct current (DC) and back to AC -- an eco-mode UPS runs the computing equipment on filtered utility power most of the time. When the incoming utility service fails, or the voltage dips below an acceptable level (as in a brownout), eco-mode UPS systems switch to full double-conversion mode so rapidly that the computing equipment never notices the power problem.
Not everyone is comfortable with eco-mode UPS due to concerns about how well it filters incoming power and what level of power stability high-performance computing equipment needs to avoid glitches, data loss or internal damage. The power supplies in modern servers, though, provide their own isolation and several seconds of ride-through power, so we're likely to see wider adoption of eco-mode systems. Any data center power system should include surge protection, particularly on eco-mode UPSes that bypass the double-conversion stage. Surge protection devices won't improve energy efficiency, but they will enhance safety.
Data centers increasingly accept DC power over AC to reduce energy usage. AC provides the best means of moving power over long distances, but all computers must convert AC power to DC to run the electronic components. If we convert incoming building power to 380 V DC, the de facto standard voltage, it eliminates the UPS inverter, as well as most of the server power supply. Eliminating two conversions should increase overall efficiency, but the actual gain is still debatable. DC requires more attention to circuit loading than AC, so it's not for everyone.
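The efficiency argument for DC distribution is about conversions in series: each stage multiplies in its losses, so removing stages raises the product. The stage efficiencies below are hypothetical round numbers for illustration only; as the article notes, the real-world gain is still debated:

```python
def chain_efficiency(*stage_efficiencies):
    """Overall efficiency of power-conversion stages in series:
    the product of the individual stage efficiencies."""
    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff
    return overall

# Hypothetical stage efficiencies, for illustration only:
# AC path: UPS rectifier, UPS inverter, server power supply (AC-to-DC).
ac_chain = chain_efficiency(0.96, 0.95, 0.92)
# DC path: one facility rectifier to 380 V DC, then the server's DC-DC stage.
dc_chain = chain_efficiency(0.96, 0.96)
print(round(ac_chain, 3), round(dc_chain, 3))  # 0.839 0.922
```

Under these assumed numbers the DC path comes out several points ahead, but the comparison is only as good as the stage efficiencies plugged in, which is exactly why the actual gain remains debatable.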
About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.