There are many new and exciting data center building design, configuration and operation choices, but many of them involve trade-offs.
These newer standards and best practices have adherents and detractors, and potential detrimental effects or poor return on investment won't always be immediately obvious. Even some standards required by building codes are nonetheless controversial.
The specific concerns surrounding hot-aisle containment designs and safety warrant their own discussion.
No more economizer exemption
For a new data center, an expansion or just an equipment replacement, the building department and inspector may require a device on the cooling system known as an economizer, as outlined in the ASHRAE 90.1 Energy Efficiency Standard. An economizer essentially bypasses the normal mechanical cooling system, using outside air or water to cool the facility when it is cold outside.
It is not logistically possible to put an economizer on every existing building. Try doing this on most high rises, like the Empire State Building, and you'll run into a problem. Even without building restrictions, the cost, particularly of a retrofit, can far exceed the long-term energy savings.
The changeover process from conventional cooling to economizer can be problematic as well, which raises concerns for mission-critical facilities.
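The changeover concern can be illustrated as simple control logic. This is a minimal sketch, not any vendor's sequence of operations; the temperature thresholds and the hysteresis band are hypothetical values chosen for illustration.

```python
# Sketch of economizer changeover with a hysteresis band. The thresholds
# below are illustrative, not taken from ASHRAE 90.1 or any product manual.

def economizer_mode(outdoor_temp_c, currently_free_cooling,
                    enable_below=8.0, disable_above=12.0):
    """Decide between free cooling and mechanical cooling.

    The gap between enable_below and disable_above prevents rapid
    back-and-forth changeovers near the threshold -- exactly the kind of
    transition that worries mission-critical operators.
    """
    if outdoor_temp_c <= enable_below:
        return "free_cooling"
    if outdoor_temp_c >= disable_above:
        return "mechanical"
    # Inside the band: hold the current mode to avoid short-cycling.
    return "free_cooling" if currently_free_cooling else "mechanical"

print(economizer_mode(5.0, False))    # cold outside: free cooling
print(economizer_mode(10.0, True))    # inside the band: hold current mode
print(economizer_mode(15.0, True))    # warm outside: mechanical cooling
```

The hysteresis band is the key design choice: without it, an outdoor temperature hovering near a single setpoint would toggle the plant between modes repeatedly.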
There was a data center exemption from ASHRAE SSPC 90.1 until 2010. Its removal became so controversial that a new standards committee, ASHRAE SPC 90.4, formed to develop a more realistic approach to achieving energy savings. That standard won't be out until at least 2015 and may not be adopted by building authorities for years.
There are some very limited exceptions in 90.1. Anyone embarking on a data center building project should investigate whether the latest version applies in their jurisdiction and how it affects their plans.
Most security breaches result from human carelessness or malfeasance.
Data center operators rarely enforce manual sign-in/sign-out procedures, so most buildings adopt card keys. However, relatively few operators require both swipe-in and swipe-out, which would reveal a card being used to enter while its holder is already logged as inside, a sign that two people are sharing a key. You cannot force swipe-outs in emergencies, because it is illegal to trap someone in a room during a crisis, but the dual action increases security the vast majority of the time.
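The anti-passback check that swipe-in/swipe-out enables reduces to a few lines of logic. This is a hedged sketch assuming a simple event feed; the card IDs and the event format are made up for illustration.

```python
# Sketch of card-key anti-passback detection over a stream of swipe events.
# Card IDs and the (card_id, direction) event format are hypothetical.

def detect_passback(events):
    """Flag swipe-ins for cards the system already believes are inside.

    events: iterable of (card_id, direction) tuples, direction "in" or "out".
    Returns the card IDs flagged for possible key sharing.
    """
    inside = set()   # cards currently believed to be in the room
    flagged = []
    for card_id, direction in events:
        if direction == "in":
            if card_id in inside:
                # A second swipe-in without an intervening swipe-out
                # suggests the card was passed back to a second person.
                flagged.append(card_id)
            inside.add(card_id)
        else:
            inside.discard(card_id)
    return flagged

# Card 42 swipes in twice without ever swiping out.
print(detect_passback([("42", "in"), ("7", "in"), ("42", "in"), ("7", "out")]))
# → ['42']
```

This is also why skipping swipe-out weakens the scheme: without "out" events, the `inside` set never empties and the check cannot distinguish sharing from a normal return visit.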
Some data centers added PIN code entry and/or biometric identification through fingerprint recognition or iris scanning to prevent misuse of key cards. A two-person authentication rule prevents people from accessing data center assets unaccompanied, but this may prove impractical. Cameras monitoring each aisle detect intruders, but the trespassers are already inside your facility. Adding telemetered cipher locks to cabinet doors could be the last line of defense in ultra-secure facilities, particularly multi-tenant colocation centers.
The most controversial data center security method is the man trap. This design commonly frames a vestibule with two doors; the second door can't open until the first one is closed. A people-counting system ensures that only one person is in the vestibule at a time. Others use special revolving doors to accomplish the same goal. Since large equipment and people have to get in and out of the data center, the traps must be very big or a second, less-secure entry must exist. Fire codes require the doors to immediately unlock in an emergency and almost always require a second means of emergency egress.
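The interlock rules described above can be sketched as a small state machine. This is an illustration only, assuming hypothetical door and sensor inputs; a real mantrap controller is a life-safety device governed by fire code, not application code.

```python
# Minimal sketch of a two-door mantrap interlock. The door flags, the
# people-counting sensor and the emergency input are illustrative only.

class Mantrap:
    def __init__(self):
        self.outer_open = False
        self.inner_open = False
        self.occupancy = 0        # from a people-counting sensor
        self.emergency = False    # fire-alarm input: release all doors

    def may_open_outer(self):
        # Fire codes require doors to unlock immediately in an emergency.
        return self.emergency or not self.inner_open

    def may_open_inner(self):
        # The second door stays locked until the first is closed and the
        # counter confirms exactly one person is in the vestibule.
        return self.emergency or (not self.outer_open and self.occupancy == 1)

trap = Mantrap()
trap.outer_open = True
trap.occupancy = 1
print(trap.may_open_inner())   # False: outer door is still open
trap.outer_open = False
print(trap.may_open_inner())   # True: one person, outer door closed
trap.occupancy = 2
print(trap.may_open_inner())   # False: tailgating detected
trap.emergency = True
print(trap.may_open_inner())   # True: emergency release overrides the interlock
```

Note that the emergency input overrides every interlock condition, mirroring the code requirement that doors unlock immediately in an emergency.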
Switched outlet power strips
Wiremold plug strips have been replaced by ever more sophisticated power distribution units (PDUs) to reliably deliver power to each device in the rack or cabinet. Today's PDUs also measure current draws, monitor temperature and humidity, and even control cipher locks on cabinet doors. The newest units individually and remotely switch and monitor each receptacle on each power strip. Is this necessary or even useful? It costs more, and the value of these individual outlet switches depends on your data center.
Individual outlet switching lets a data center remotely shut down and reboot servers in lights-out operations and prevent unauthorized equipment from running by turning off all unused outlets. However, because these PDUs are network-connected, there is always the potential for intruders to shut down critical operations. If you can't restrict control to a private, internal network, the risk may not be worth the reward.
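The "turn off all unused outlets" policy amounts to comparing the strip against an inventory. A hedged sketch, assuming a hypothetical outlet-to-asset mapping; in practice the result would drive a vendor's SNMP or management API, ideally over a private network as noted above.

```python
# Sketch of the unused-outlet lockdown policy. Outlet numbering and the
# inventory mapping are hypothetical; a real PDU would be commanded
# through its vendor's management interface.

def outlets_to_disable(total_outlets, inventory):
    """Return receptacles with no authorized device plugged in.

    total_outlets: number of receptacles on the strip (numbered from 1)
    inventory: dict mapping outlet number -> asset tag of authorized device
    """
    return [n for n in range(1, total_outlets + 1) if n not in inventory]

# A 6-outlet strip with authorized servers on outlets 2 and 5:
print(outlets_to_disable(6, {2: "web01", 5: "db01"}))
# → [1, 3, 4, 6]
```

Switching off everything the inventory doesn't claim means rogue equipment plugged into a "spare" receptacle simply never powers up.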
Individual receptacle power draw monitoring could help with phase-balancing, but most new compute hardware provides much more information from internal sensors. The volume of data generated by all the outlets in a large data center would be overwhelming and not very useful without a robust data center infrastructure management system.
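The phase-balancing use mentioned above is simple arithmetic once per-receptacle readings exist. A minimal sketch, assuming a hypothetical outlet-to-phase mapping; a real three-phase PDU documents which receptacles hang off phases L1, L2 and L3.

```python
# Sketch of phase-balancing arithmetic from per-receptacle current readings.
# The readings and the outlet-to-phase mapping below are illustrative.

def phase_loads(readings, phase_map):
    """Sum per-receptacle amps into per-phase totals.

    readings: dict of outlet number -> measured amps
    phase_map: dict of outlet number -> "L1" | "L2" | "L3"
    """
    totals = {"L1": 0.0, "L2": 0.0, "L3": 0.0}
    for outlet, amps in readings.items():
        totals[phase_map[outlet]] += amps
    return totals

readings = {1: 4.2, 2: 6.1, 3: 1.5, 4: 5.0}
phase_map = {1: "L1", 2: "L2", 3: "L3", 4: "L1"}
loads = phase_loads(readings, phase_map)
print(loads)                               # per-phase totals in amps
imbalance = max(loads.values()) - min(loads.values())
print(round(imbalance, 1))                 # spread between heaviest and lightest phase
```

A large spread flags a strip whose loads should be replugged across phases; this is also the kind of summary a DCIM system would compute so the raw per-outlet stream doesn't overwhelm operators.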
Preventive maintenance is controversial because things can go wrong when technicians work on equipment. But does that mean we should not do it?
All mechanical and electrical equipment will eventually break, so if it's going to quit, why not be prepared? Maintenance preparation is particularly critical in data centers that don't have redundant systems, such as Uptime Tier I and II facilities. But even in Tier III or IV facilities, maintenance should occur when an accident would cause the least damage. Schedule it around IT projects, and have knowledgeable in-house staff supervise, particularly with a new service person. Have facilities staff available to immediately deal with problems at all levels, and put a backup/recovery plan in place.
In addition to contract maintenance on the uninterruptible power supplies and air conditioners, preventive work should include cleaning filters on servers and switches. Yearly infrared (IR) scanning detects any hot electrical connections so they can be tightened, preventing a fire and/or catastrophic failure.
Unfortunately, few electrical enclosures are equipped with IR scanning windows so, depending on power level, the technician may need to put on a clumsy Arc Flash Hazard suit before opening access panels. Whenever possible, install electrical equipment with IR scanning windows.
Ceilings and observation windows
There was a time when every data center had windows looking from the operations center into the computer room. But now, most operations are conducted elsewhere. Even when the building includes adjacent ops centers, windows either look at the back of a row of cabinets or down a few rows. Unless they span a wall, windows won't help you keep an eye on visitors. Video cameras provide a far better view of everything important, and selectively show visitors what's inside, if desired.
Ceilings were once rare in data centers -- they kept heat trapped at a low level and required additional fire suppression systems. Ceilings now create air return plenums that improve the delivery of hot air back to perimeter air conditioners. They also limit the room's volume, helping gaseous fire suppression systems. But when used for return air, both the ceiling materials and everything in the plenum space should be constructed to avoid particulates flaking off into the air stream. In nonraised floor environments, a ceiling limits the vertical space available for all overhead infrastructure, so examine the structure height closely. In all cases, operational considerations should trump aesthetic ones.
About the author:
Robert McFarlane is a principal in charge of data center design at Shen Milsom and Wilke LLC, with more than 35 years of experience. An expert in data center power and cooling, he helped pioneer building cable design and is a corresponding member of ASHRAE TC9.9. McFarlane also teaches at Marist College's Institute for Data Center Professionals.