You're going to be in your current data center for at least another five years -- maybe as long as 10.
The racks are getting crowded, and the business is pushing for IT to add services that require more powerful servers and bigger networks. Budgets are tight and you'll need to justify any data center improvements you request, particularly if management is planning a site migration or outsourcing initiative and wants the current facility to get by with as little as possible.
What's really worth doing to gain space, power and cooling efficiency in an existing facility for the short term? Which things will make a difference, and which may do nothing or even make problems worse?
You can make improvements in the data center without wasting money from the IT or facility budget. Adhere to the basics, examine upgrade options carefully before implementation and get the maximum out of what you have before adding more.
Space, the final frontier
The data center's square footage is unlikely to change, so be as efficient as possible with what you have.
Get rid of anything that isn't in use. Most data centers nearing end-of-life have comatose servers turned on in racks. No one knows what they do, but the ops team fears shutting them down. Turn them off and see who screams -- probably no one. If something useful is still on a zombie server, migrate it to another device. Clear out these systems to help with power and cooling problems as well.
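One way to build a "turn it off and see who screams" candidate list is to export long-term utilization from whatever monitoring you already have and flag machines that do essentially nothing. A minimal sketch, with made-up server names, metrics and thresholds -- none of these come from the article, so adapt them to your own monitoring export:

```python
# Hypothetical example: flag "comatose" candidates from exported monitoring
# data. Server names, numbers and thresholds are all assumptions.

# (server, avg CPU %, avg network KB/s, both over the last 90 days)
metrics = [
    ("web-01",    42.0, 5200.0),
    ("legacy-07",  1.2,    0.3),   # likely comatose
    ("db-02",     18.5,  900.0),
    ("unknown-3",  0.8,    0.1),   # likely comatose
]

CPU_THRESHOLD = 2.0   # percent
NET_THRESHOLD = 1.0   # KB/s

def comatose_candidates(rows):
    """Return servers whose long-term CPU and network use are both negligible."""
    return [name for name, cpu, net in rows
            if cpu < CPU_THRESHOLD and net < NET_THRESHOLD]

print(comatose_candidates(metrics))  # candidates to power off and watch
```

A list like this is a starting point for conversation, not a shutdown order; check backups, batch jobs and licensing before pulling the plug.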
Migrating workloads is easier with virtualization, but that isn't guaranteed to improve the existing data center short term. Virtualization is a major effort that can require expensive new hardware, and may add to power and cooling problems. If you already virtualize, continue to consolidate as much as is feasible within the constraints of the supporting facility infrastructure.
Most older data centers become store rooms as well as operational facilities. The dirt, dust and tripping hazards created by storage crates and boxes are bad environmentally, and they also waste valuable, highly specialized space. Get rid of the junk. Pare down what's in the actual storage rooms and move anything worth keeping there instead. Data center floor space is priceless, and IT organizations would do well to show management the cost difference between closets and cold aisles.
Rearrange cabinets and equipment to make better use of the available space, unless it requires expensive rewiring or a difficult move process.
Repair a raised floor if there are missing tiles and old cutouts. Floor repairs add usable space and help solve cooling problems -- sealing holes that spew wasted air. You can live a few years with ugly tiles, but usable space and cooling are necessities.
Feel more powerful
Power is expensive. Turning off old servers helps, but new, high-performance hardware can demand more power than shutdowns save. New feeders, uninterruptible power supplies (UPS) and distribution infrastructure are an expensive last resort when the remaining life of the facility is so short.
Look for spare capacity in the building. Segment the data center to make use of separate power feeds and UPS services; the appropriated power source might not be on a generator. Rearrange IT hardware to make the best of the situation -- highest-priority systems belong on your most reliable power service. Arrange and mark cabinets to clearly show what's on which feeder. Consider an external connection point for a rental generator to keep even lower-priority systems highly available.
Everything in your data center comes from three-phase power, whether you actually run three-phase to the cabinets or not. Use metering or expert help to determine whether IT loads are balanced across the three phases. If you're more than 10% out of balance at any check -- a very common situation -- you're squandering available power. Identify the imbalances and reallocate equipment across circuits until loading is nearly identical on all three phases.
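The balance check itself is simple arithmetic once you have per-phase readings. A sketch with hypothetical amp readings -- real numbers would come from metered PDUs or a clamp meter:

```python
# Sketch of the phase-balance check described above. The per-phase
# loads (in amps) are hypothetical example readings.

phase_loads = {"A": 92.0, "B": 61.0, "C": 77.0}  # amps on each phase

avg = sum(phase_loads.values()) / 3
# Deviation of each phase from the average, as a percentage
imbalance = {p: (load - avg) / avg * 100 for p, load in phase_loads.items()}

worst = max(abs(v) for v in imbalance.values())
print(f"Average load: {avg:.1f} A, worst deviation: {worst:.1f}%")
if worst > 10:
    print("Out of balance -- reallocate circuits across phases.")
```

In this made-up case the worst phase is about 20% off the average, double the 10% rule of thumb, so some circuits should move to phase B.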
Invest in monitoring devices, such as intelligent power distribution units (manufacturers call these ePDUs, iPDUs, CDUs or similar names), and a basic data center infrastructure monitoring and management system. These devices help maintain phase balance without huge capital outlay for data center improvements.
Cooling is often the most challenging part of keeping an aging facility running.
Air escaping through openings in and between cabinets is still a common problem. Be diligent about blocking holes and you'll be rewarded with better cooling for free.
Whether you use a raised floor or overhead vents, improve cooling by rebalancing the air flow. A good data center audit, including on-site measurements and a computational fluid dynamics (CFD) model, may be the best investment for short-term updates. It's a waste of money to install additional computer room air conditioner (CRAC) units when they aren't needed, or to put a CRAC in the wrong place. Adding an air conditioner where it doesn't belong actually degrades cooling. A CFD model may even show that you just need to move hardware to make use of excess cooling capacity.
Also audit the cooling temperature the current IT load actually needs; you can get more capacity from CRACs by raising air temperatures in line with the newest ASHRAE guidelines. Don't do this cavalierly -- consider the starting temperature and the hardware in use.
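One simplified way to see why warmer operation buys capacity is the standard sensible-cooling relationship, Q = 1.08 x CFM x deltaT: at a fixed airflow, capacity scales with the return-to-supply temperature difference across the coil. The airflow and temperatures below are assumed values for illustration, not figures from the article:

```python
# Back-of-the-envelope sensible-cooling arithmetic. The 1.08 constant is
# the standard sea-level factor for air; CFM and deltas are assumptions.

def sensible_cooling_btu_hr(cfm, delta_t_f):
    """Sensible cooling: Q = 1.08 x CFM x deltaT (degrees F)."""
    return 1.08 * cfm * delta_t_f

CFM = 12000  # assumed CRAC airflow

before = sensible_cooling_btu_hr(CFM, 18)  # 18 F return-to-supply delta
after = sensible_cooling_btu_hr(CFM, 22)   # wider delta at a higher return temp

print(f"{(after - before) / before:.0%} more capacity from the same unit")
```

Even a few degrees of extra delta is a meaningful capacity gain from hardware you already own, which is why the temperature audit comes before any new CRAC purchase.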
Under-floor cables degrade air flow. It's no fun cleaning out old cables and straightening up the serviceable ones, but it's a lot less expensive than a major cooling upgrade.
Add containment for another cost-effective step to improve cooling efficiency. Cold aisle containment is usually the easiest to retrofit, even though most studies show that hot aisle containment is fundamentally simpler to manage and slightly more energy efficient. The big obstacle is fire protection, detailed in NFPA 75 and 76, so keep air barriers from isolating sprinklers and gas heads. Consult a fire protection engineer when implementing any containment scheme. Even if fire heads are not optimally situated, well-designed partial containment is effective and safe. Use the data center CFD model to map out potential containment schemes.
If you must bring in additional cooling, localized approaches can supplement facility CRACs. In-row coolers (IRC) take up floor space, but provide cooling exactly where it is needed. Concentrating higher-density hardware in one area of the data center, along with IRCs and containment, could save money and improve cooling over an additional perimeter CRAC. Rear door heat exchangers handle very high-density cabinets with minimal floor space, but they may require more piping and control devices than IRCs. Immersion cooling can tackle extreme heat loads on specific equipment, but it's a major change rather than a short-term facility fix.
Equipped for success
When making short-term data center improvements, look at equipment leases. Work with vendors so that new leases and lease extensions expire when you move. Even better, negotiate for the vendor to install new equipment in the new facility to overlap with the existing deployment, so you can configure and test systems before moving.
Vendors should be accommodating since the lease is an assurance that you will continue to use their products.
About the author:
Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane is one of several regular contributors to SearchDataCenter's Advisory Board, a collection of experts working in a variety of roles across the IT industry.