Telecom rooms have become something of an extension of the data center -- or at least an extension of the network that originates there.
Nearly everything -- telephones, security, clocks, video, wired and wireless networks and even televisions -- relies on the network to support IP-enabled functions.
Much of what is in telecom rooms is just as mission-critical as what resides in the data center. In a hospital, for example, the nurse call, patient order, drug dispensing and patient record systems all depend on the network.
Uninterruptible power supplies (UPS) are mandatory for this equipment, but what kind of UPS? Unlike in a data center, telecom rooms are scattered on every floor of the building, so it is costly to extend power wiring from a centralized data center UPS. In a large building, however, maintaining a fleet of local, rack-mount UPSes -- often deployed in redundant pairs -- can become a maintenance nightmare.
Don't compare rack-mount to centralized UPS strictly on cost. The best approach to telecom UPS depends on the building design, reliability requirements and maintenance responsibilities.
One central UPS
The central UPS must be large enough to handle the telecom room loads, as well as those in the data center. Those telecom room loads can add up: Power over Ethernet runs phones, wireless access points and even clocks and digital displays. All that power has to go through the network switch, in addition to the power that runs the switch and anything else in the room.
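To see how those loads accumulate, here is a minimal sketch of a room load estimate. The per-device wattages and device counts are illustrative assumptions, not vendor figures -- the point is that every PoE watt flows through the switch and therefore lands on the UPS, along with the switch's own draw.

```python
# Rough telecom-room load estimate: PoE draw flows through the switch,
# so it adds to the UPS load along with the switch's own consumption.
# Device counts and wattages below are illustrative assumptions.

POE_DRAW_W = {
    "phone": 7,          # typical 802.3af-class desk phone
    "access_point": 25,  # 802.3at (PoE+) wireless access point
    "clock": 5,          # PoE clock
    "display": 13,       # small PoE digital display
}

def room_load_watts(devices: dict[str, int], switch_overhead_w: float = 150.0) -> float:
    """Sum PoE device draw plus the switch's own power consumption."""
    poe = sum(POE_DRAW_W[kind] * count for kind, count in devices.items())
    return poe + switch_overhead_w

# A hypothetical floor: 48 phones, 6 APs, 4 clocks, 2 displays
floor = {"phone": 48, "access_point": 6, "clock": 4, "display": 2}
print(f"Estimated room load: {room_load_watts(floor):.0f} W")
```

Even this modest floor lands well above half a kilowatt before any servers or other rack gear are counted.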
Central UPS systems do not need to be compact to fit into a standard rack. They accommodate large components necessary for maximum reliability, with physical separation to minimize heat exposure.
These systems also generally tolerate overload well and respond to sudden load changes, such as the inrush current that occurs when a redundant UPS or module fails or is taken out of service. That is not to say that smaller, rack-mount UPS units are unreliable, but central systems are more robust over the long term. A central UPS is also continuously monitored -- if anything goes wrong, alarms immediately alert the appropriate staff so they can address the problem.
Under maintenance contracts, factory-authorized technicians thoroughly check and test the central UPS at regular intervals. They replace degraded components before the pieces fail. As a result, total, sudden failures of large UPS systems are extremely rare.
It's normal to upsize a UPS by 20% for safety and headroom. If loads in 20 telecom rooms vary between 5 kW and 8 kW, each room will likely get a 10 kW UPS. If the loads are evenly distributed -- five rooms each at 5 kW, 6 kW, 7 kW and 8 kW -- the total load is 130 kW. Upsizing room by room then means 200 kW of installed capacity, or 54% beyond actual need. Each kW of that capacity costs money, and redundant dual UPSes double the excess. The facilities group could specify four different unit sizes, each closer to actual need, but the lack of uniformity among rooms would make maintenance challenging. By contrast, a central UPS in this scenario would be upsized to roughly 156 kW, potentially saving significant costs.
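The arithmetic behind that example can be checked in a few lines. The room counts and the 20% headroom factor come straight from the scenario above; nothing else is assumed.

```python
# Reproduce the sizing arithmetic from the 20-room example above.
room_loads_kw = [5] * 5 + [6] * 5 + [7] * 5 + [8] * 5   # 20 rooms, evenly distributed
total_kw = sum(room_loads_kw)                            # actual aggregate load

per_room_ups_kw = 10                                     # 8 kW peak + 20% headroom -> 10 kW unit
installed_kw = per_room_ups_kw * len(room_loads_kw)      # total installed, room by room
excess = (installed_kw - total_kw) / total_kw            # capacity beyond actual need

central_kw = total_kw * 1.2                              # one central UPS, 20% headroom

print(total_kw, installed_kw, f"{excess:.0%}", central_kw)
```

The per-room approach installs 200 kW against a 130 kW load (about 54% excess), while one central unit with the same headroom needs only about 156 kW -- and duplicating every rack-mount unit for redundancy doubles the excess again.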
Problems with a central UPS
There are plenty of downsides to centralizing the UPS for telecom and data center equipment. Normal building power is delivered to each floor in much the same way as telecom services: Main feeders go to local electrical rooms just as fiber runs go to telecom rooms. Transformers reduce the power for distribution while network switches split the trunk into lower-speed service drops to each office, workstation and device.
Powering telecom rooms from a central UPS requires either a second high-voltage distribution network, or individual low-voltage runs from the UPS to each telecom room, but both are expensive. A second distribution network takes additional switchgear, transformers and building space. Long electrical runs to each telecom room are costly and lossy. Long, low-voltage runs require larger wire gauges to curb losses. And all those conduits, wires and circuit breaker panels are in addition to the normal building power. It's more cost-effective to run larger circuits to the telecom rooms on each floor to power local UPSes.
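The wire-gauge penalty on long low-voltage runs can be sketched with the standard circular-mil voltage-drop approximation for copper conductors (Vdrop = 2 x K x I x L / CM, K roughly 12.9 ohm-cmil/ft). The run length, circuit current and 3% drop target below are illustrative assumptions, not a design calculation.

```python
# Voltage drop on a long low-voltage feeder (single-phase, copper).
# Standard approximation: Vdrop = 2 * K * I * L / CM, K ~ 12.9 ohm-cmil/ft.
# Run length, current and the 3% target below are illustrative assumptions.

K_COPPER = 12.9  # ohm-circular-mil per foot
CMIL = {12: 6530, 10: 10380, 8: 16510, 6: 26240, 4: 41740, 2: 66360}  # AWG -> circular mils

def vdrop_volts(amps: float, feet: float, awg: int) -> float:
    """Round-trip voltage drop over a run of the given one-way length."""
    return 2 * K_COPPER * amps * feet / CMIL[awg]

def smallest_gauge(amps: float, feet: float, volts: float, max_pct: float = 3.0) -> int:
    """Smallest conductor keeping drop within max_pct of nominal voltage."""
    for awg in sorted(CMIL, reverse=True):  # AWG 12 (smallest wire) first
        if vdrop_volts(amps, feet, awg) / volts * 100 <= max_pct:
            return awg
    raise ValueError("run too long for the listed gauges")

# A 20 A, 120 V telecom-room circuit fed 250 ft from a central UPS:
print(f"AWG {smallest_gauge(20, 250, 120)}")
```

A short in-room circuit gets by on ordinary #10 or #12 wire; the same circuit run 250 ft from a central UPS needs #4 copper to hold the drop under 3% -- which is why those long homeruns, multiplied across every telecom room, get expensive fast.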
A small remote circuit breaker panel is required in each telecom room, and these panels are not likely to be monitored or IP-addressable. If a circuit breaker trips, the only indication is that a switch goes down; the cause won't be known until someone visits the room and assesses the problem.
Remotely switched UPS receptacles in telecom rooms are questionable for security reasons. But without remote control, anyone with access can plug in unauthorized loads, pushing capacity until a new large data center load topples the over-stressed UPS and triggers a shutdown. This becomes even more likely with three-phase circuits and poor phase balance control. The UPS may still have plenty of total capacity, but overloading a single phase will still bring it down, and phase balance is hard to monitor across many outlying locations.
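The single-phase failure mode is easy to illustrate: the aggregate load can sit comfortably under the UPS rating while one phase is past its share. The load figures below are hypothetical, and real three-phase capacity limits depend on the specific UPS, but the check itself is just per-phase bookkeeping.

```python
# Per-phase overload check for a three-phase UPS: total load can be well
# under the UPS rating while a single phase is overloaded.
# Load figures below are hypothetical.

def overloaded_phases(phase_loads_kw: dict[str, float], ups_kw: float) -> list[str]:
    """Return the phases loaded beyond a one-third share of UPS capacity."""
    per_phase_limit = ups_kw / 3
    return [p for p, kw in phase_loads_kw.items() if kw > per_phase_limit]

loads = {"A": 4.8, "B": 1.1, "C": 1.0}  # 6.9 kW total on a 12 kW UPS
print(overloaded_phases(loads, 12))      # phase A is past its 4 kW share
```

With unauthorized loads landing on whichever receptacle is handy, this is exactly the imbalance that builds up unnoticed across many remote rooms.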
Electricians may work on the system at any time, and a technician error has the potential to shut down the central UPS. Intervening circuit breakers should isolate the problem and prevent faults from propagating -- unless they were incorrectly sized or calibrated.
When a UPS lives in each telecom room, overload and technician work only affect the local UPS and equipment. Monitoring allows IT operations staff to correct a loss of UPS input power, or fix a condition causing the UPS to switch to bypass mode before it shuts down the telecom room.
Despite the probable oversizing of the individual UPSes, and the loss of economy of scale, multiple smaller UPSes can cost less than adding major electrical infrastructure to upgrade the data center UPS. This depends on the number of telecom rooms, how widespread they are, and the costs of individual UPS units versus upsizing the central UPS.
Service calls are isolated to a single telecom room with rack-mount UPSes. With only a central UPS, maintenance and service activities put the system in bypass mode and can impose a power risk to the data center.
The downsides of local UPS
IT, rather than facilities, is typically responsible for individual, rack-mounted UPS units, including capital and maintenance costs. That division of responsibility might motivate the inclusion of local UPS designs in new construction budgets.
IT is also responsible for monitoring the many UPS systems, replacing failed units and performing battery replacements on each unit every three years or so. To monitor each system, IT teams need network ports in each telecom room and robust monitoring and trend analysis software. With monitored rack power strips in the same rooms, expensive switch ports add up quickly.
Smaller UPSes, particularly those that run at a low percentage of capacity, are less energy-efficient than their large UPS counterparts. As with the oversizing analysis, the waste adds up over multiple rooms and skyrockets with redundancy.
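A rough comparison shows the scale of that waste. The efficiency figures below are illustrative assumptions, not vendor data -- double-conversion UPSes are typically least efficient at low load fractions -- and the loads reuse the 20-room scenario from earlier.

```python
# Annual conversion losses: many modestly loaded small UPSes vs. one
# central unit. Efficiency figures are illustrative assumptions, not
# vendor data; loads reuse the earlier 20-room scenario.

HOURS_PER_YEAR = 8760

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy wasted per year delivering load_kw through a UPS at the given efficiency."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

# 20 rooms averaging 6.5 kW, each on a 10 kW unit (~65% load, assumed 88% efficient):
small_units = 20 * annual_loss_kwh(6.5, 0.88)
# Same 130 kW on a central UPS with 20% headroom (~83% load, assumed 96% efficient):
central = annual_loss_kwh(130, 0.96)
print(f"{small_units:,.0f} vs {central:,.0f} kWh/yr wasted")
```

Under these assumed efficiencies the distributed fleet wastes several times the energy of the central unit every year -- and doubling every rack-mount UPS for redundancy roughly doubles the distributed figure again.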
In most installations, the rack plug strips -- power distribution units (PDUs) -- are plugged into receptacles on the rack-mounted UPS. A technicality in the National Electrical Code (NEC or NFPA-70) requires that the UPS and PDU be UL-listed as a unit in this configuration. Rarely are UPSes and PDUs purchased and installed that meet this requirement, and even more rarely does an inspector raise a concern about this detail as long as both components are individually UL-listed. But the possibility exists that an installation could be cited as a code violation and shut down.