One of the most deceptive designations of all time is "uninterruptible power supply." The false sense of security this name implies has trapped many of the uninitiated. Why? Two reasons: poor design or configuration by the engineer or sales rep, and a lack of understanding on your part.
When everything works right, the UPS really does live up to its name. But when it doesn't, it can be a solid barrier between your equipment and perfectly functioning building or generator power. Try explaining that one to management! The lights are on, but your data center is down because that expensive UPS you wanted is out of commission. Have your resume ready!
In simplest terms, a UPS converts incoming AC power to DC through a rectifier, then back to AC through an inverter. The DC power also keeps the batteries charged. If incoming power fails, the batteries start discharging into the inverter to keep power flowing.
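The power path described above can be sketched as a toy model. All capacities, loads, and the time step here are invented for illustration; a real UPS is a hardware control loop, not a Python function.

```python
# Toy model of a double-conversion UPS power path, as described above.
# All capacities and loads are invented for illustration.

def ups_step(mains_ok, soc_kwh, load_kw, charge_kw=2.0,
             capacity_kwh=10.0, dt_h=1.0 / 60):
    """Advance one time step; return (output_alive, new_soc_kwh)."""
    if mains_ok:
        # Rectifier feeds the inverter directly AND recharges the battery.
        soc_kwh = min(capacity_kwh, soc_kwh + charge_kw * dt_h)
        return True, soc_kwh
    # Mains failed: the battery discharges into the inverter instead.
    soc_kwh -= load_kw * dt_h
    if soc_kwh <= 0:
        return False, 0.0          # battery exhausted -- output drops
    return True, soc_kwh

# A 30-minute outage at a 25 kW load on a 10 kWh battery:
alive, soc = True, 10.0
for _ in range(30):                # one step per minute
    alive, soc = ups_step(mains_ok=False, soc_kwh=soc, load_kw=25.0)
print(alive, soc)                  # the battery runs flat before 30 minutes
```

Note the key point the model captures: the load is always fed from the inverter, so a mains failure changes nothing downstream until the battery itself gives out.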
There are two basic types of UPS (we will not try to deal with flywheels or other, more esoteric devices in this limited space):
"Full-time" UPS, in which the equipment continuously powered from the re-created alternating current from the UPS. This is also known as a "double conversion" UPS.
- "Line-interactive" UPS, in which the equipment is actually running from normal building power, with some filtering, until power fails, whereupon the load is quickly switched to the actual UPS and the batteries start to drain. The prices of these units are kept down by making the rectifier large enough to charge the batteries, but not big enough for the full equipment load.
With full-time UPS, the equipment never sees the outage -- not even a ripple. Line-interactive UPS takes two or three power cycles (about 1/30 of a second on 60 Hz power) to switch to battery support -- a time short enough for equipment power supplies to keep the computers running. They "see" the interruption, but it rarely affects them.
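The "1/30 second" figure is simple cycle arithmetic, worth checking against your own equipment's specs:

```python
# The arithmetic behind "two or three power cycles (about 1/30 second)":
# on 60 Hz mains, one cycle is 1/60 s, so two cycles is 1/30 s (~33 ms).

LINE_HZ = 60                       # North American mains frequency
transfer_cycles = 2
transfer_s = transfer_cycles / LINE_HZ
print(round(transfer_s * 1000, 1))   # 33.3 ms

# On 50 Hz mains the same two-cycle transfer stretches to 40 ms --
# worth checking against your power supplies' rated hold-up time.
print(round(2 / 50 * 1000, 1))       # 40.0 ms
```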
Most of what follows assumes full-time UPS designs, since that is what is generally used in full-blown data centers. We must also caution about the use of line-interactive UPS with generators. Private generator power is not as well-stabilized as commercial power feeds. It's a lot better than darkness, but it can fool line-interactive UPS units into thinking power is being randomly restored and interrupted, causing the UPS to keep switching back and forth. As will be seen from what follows, this can ruin batteries, as well as the UPS, and the multiple switchovers can also be more than your hardware can tolerate.
So let's look at what can go wrong, and actually make the UPS "interruptible."
Batteries: Most UPSes today use "sealed-cell" batteries, properly known as VRLA (valve-regulated lead acid). These can be used in a normal, occupied environment because they don't emit explosive hydrogen gas the way flooded lead acid "wet cells" do. Any battery can fail, but VRLA cells have a much shorter service life: a 5-10-year warranty as opposed to a 20-25-year one for wet cells -- and that's if they're rarely used. If you're in an area that experiences multiple short-duration power losses, VRLA batteries have been known to fail in as little as a year. And, of course, failures occur most often under load; in other words, during a power failure, when the batteries are needed most. Since battery cells are connected in series, like those little Christmas lights, if one cell fails "open" (the usual failure mode), battery power stops and your UPS is dead -- immediately! Remedy? Dual or multiple battery strings, and either automatic or regular manual battery testing.
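The Christmas-lights analogy is easy to put in numbers: a series string needs every cell good, while parallel strings need only one survivor. The cell count and per-cell reliability below are invented purely for illustration; real figures depend on age, temperature, and discharge history.

```python
# Why dual strings are the remedy: series cells must ALL work, while
# parallel strings only need one good string. Numbers are illustrative.

def string_ok(p_cell_ok: float, n_cells: int) -> float:
    """A series string delivers power only if every cell is good."""
    return p_cell_ok ** n_cells

def any_string_ok(p_string_ok: float, n_strings: int) -> float:
    """With parallel strings, the UPS survives if at least one works."""
    return 1.0 - (1.0 - p_string_ok) ** n_strings

one = string_ok(0.995, 40)              # 40 cells at 99.5% each
print(round(one, 3))                    # ~0.818 -- one weak link hurts
print(round(any_string_ok(one, 2), 3))  # ~0.967 with a second string
```

Even with optimistic per-cell numbers, a long single string is noticeably fragile, and a second string claws most of that reliability back.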
Bypass: Virtually all UPSes have an internal maintenance bypass. This allows a technician to work on the insides safely. It's also supposed to kick in automatically when the batteries run out, go bad, or some other UPS failure occurs. But most UPSes have components -- usually input or output transformers -- that are outside the "bypass" chain. These things don't fail often, but when they do, you're dead in the water. Power is coming into your building, but it can't get past your UPS. Very embarrassing. Hard to explain. In one case, we saw a transformer literally go up in flames and fry not only itself, but the UPS innards as well. We've also seen instances where the internal bypass failed and there was no way to operate it manually.
There are only three ways around these situations:
- Run and hide. (Not a good career choice.)
- Get an electrician to wire around the UPS. (Time-consuming.)
- Install "full wrap-around bypass." (Initially higher cost, but safer.)
The last option is always our choice, but we often have to fight for it against "statistical failure" data and "value engineering" pressures. If you ever experience one of these failures first-hand, the statistics become meaningless and the "savings value" plummets to zero. We would never advise a client to install a UPS without full wrap-around bypass.
Redundancy: This is a large topic and more complex than we can cover thoroughly in this forum. Suffice it to say that there are many approaches to UPS redundancy, all with differing levels of protection. Maximum reliability is achieved with a fully redundant "2N" design, with each UPS running at less than 50% load and static transfer switches to shift load within a few power cycles of a module failure. This is obviously also the most expensive and is not justified for everyone. Every step below this carries an increased risk -- sometimes very small and sometimes significant -- and the specifics of equipment selection and connection can make major differences in even the "ultimate design" performance. For example, with any redundant design, one of the most important things to verify is how the UPS responds to an instantaneous doubling of power draw ("step function"), since that is exactly what will happen if a module fails. With primary-side static transfer switches, it's important to look at how current rise is controlled, since the sudden current change created by switching can cause something called "saturation" in downstream transformers, resulting in unacceptable waveform distortion. There are many dozens more things to consider in arriving at the most realistic, cost-justifiable UPS for your needs.
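The 2N "step function" is easy to make concrete. The total load and module rating below are hypothetical; the point is the instantaneous jump the surviving module must absorb.

```python
# The 2N "step function" in numbers. Load and module rating are
# hypothetical; note the instantaneous jump the survivor sees.

total_load_kw = 180.0
module_rating_kw = 200.0

normal_fraction = (total_load_kw / 2) / module_rating_kw
failover_fraction = total_load_kw / module_rating_kw

print(normal_fraction)     # 0.45 -- each module under 50% in normal operation
print(failover_fraction)   # 0.9  -- the survivor's load doubles in an instant
```

That jump from 45% to 90% of rating, in a fraction of a power cycle, is exactly the transient you need the vendor to demonstrate the UPS can handle cleanly.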
Air conditioning and battery duration: Let's say it bluntly: there's no value in having four hours of battery if your hardware (including your UPS) is going to go into thermal shutdown in 10 minutes for lack of air conditioning. Unless you have a backup generator, with your total air conditioning plant properly connected to it and thoroughly tested in a real "pull the plug" commissioning process, almost everything you have is going to be down in less than 30 minutes anyway. Big blade centers may make it only a minute or two without cooling, and some of the newest hardware can be down in seconds. If you have a generator, 15-30 minutes of battery should be more than enough. And if the generator doesn't start, that's still probably enough battery, since you won't have air conditioning without it anyway. The only exception may be IDF rooms with small stackable network switches for VoIP phones. If the heat rise in the room is slow and you can keep things cool by opening the door, and if you can keep the central phone and network switches running by shutting everything else down, then as much as four hours of UPS might be considered for those devices alone, in order to keep the phones working as long as possible.
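To see why the clock runs so fast, here is a crude air-only estimate. It is grossly simplified -- it ignores the thermal mass of equipment, walls, and raised floor, which slow the real rise -- and the room size and load are assumed, so treat it strictly as an order-of-magnitude sketch.

```python
# Order-of-magnitude heat-rise estimate for a room that loses cooling.
# Ignores thermal mass of equipment, walls, and floor, which slow the
# real rise. Room size and IT load are assumed for illustration.

room_m3 = 100.0            # assumed room volume
load_kw = 50.0             # assumed IT load (essentially all becomes heat)
air_kg = room_m3 * 1.2     # air density ~1.2 kg/m^3
cp_j_per_kg_k = 1005.0     # specific heat of air

rise_k_per_s = load_kw * 1000.0 / (air_kg * cp_j_per_kg_k)
print(round(rise_k_per_s * 60, 1))   # ~25 K per minute, on air alone
print(round(15.0 / rise_k_per_s))    # ~36 s to a 15 K (27 F) rise
```

Real rooms ride a while longer on thermal mass, but the arithmetic shows why dense hardware can hit thermal shutdown long before a four-hour battery is half spent.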
In short, UPS is expensive. Question everything. Examine each potential failure scenario, and evaluate the cost of remedy against the potential cost to your business. Ask each vendor what to ask their competitors and insist on thorough explanations, from both the sales reps and your engineer. If they seem unsure, or if it sounds like doubletalk or obfuscation, dig deeper. You don't need to be an engineer to understand the operational tradeoffs. There's too much money and business risk involved to take anyone's word at face value.
This column originally appeared on TechTarget's Expert Answer Center as a post in Robert McFarlane's blog. Robert served as the on-demand expert on the Expert Answer Center for two weeks in October to November 2005, during which he was available to quickly answer questions on data center design as well as to write daily blog entries. Keep an eye on the Expert Answer Center for topics that could help your IT shop.