In today's IT world, mobility is important. As part of that, it is often necessary to move in-house workloads to a colocation facility. These moves are major events for the admin team, occasionally ending in tears and recriminations.
Here's a checklist and best practices for successfully moving servers to a colocation facility -- with as little downtime as possible.
Server migration requires careful preparation
First, determine the power, space and cooling required for the servers you are moving. The ideal move would involve whole racks of gear with in-rack cabling intact, but often the colocation facility has its own racks pre-wired to its backbone network. If that is the case, servers may be distributed differently in the facility than they were on premises. If the server layout changes, you'll need a new physical map for your gear.
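The capacity math is easy to script so it can be rerun as the inventory changes. The following is a rough back-of-the-envelope sketch; the server names, wattages and 20% headroom factor are illustrative assumptions, not figures from any specific facility:

```python
# Rough capacity estimate for a colocation move. All figures here are
# illustrative assumptions; substitute your own measured or nameplate values.

servers = [
    # (name, rack units, typical draw in watts)
    ("web-01", 1, 350),
    ("db-01", 2, 600),
    ("vm-host-01", 2, 750),
]

RACK_CAPACITY_U = 42   # assumed standard full-height rack
POWER_HEADROOM = 1.2   # 20% headroom for spikes and growth (assumption)

total_u = sum(u for _, u, _ in servers)
total_watts = sum(w for _, _, w in servers) * POWER_HEADROOM
# Nearly all input power ends up as heat: 1 W is about 3.412 BTU/h of cooling.
cooling_btu_per_hr = total_watts * 3.412
racks_needed = -(-total_u // RACK_CAPACITY_U)  # ceiling division

print(f"Rack units: {total_u}U across {racks_needed} rack(s)")
print(f"Power budget: {total_watts / 1000:.2f} kW")
print(f"Cooling load: {cooling_btu_per_hr:.0f} BTU/h")
```

Numbers like these are what the colocation provider will ask for up front, so it pays to have them computed rather than guessed.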
Consider whether system downtime is allowed for the move. For servers dedicated to a specific workload, IT teams might be able to turn them off, move them and turn them back on without user impact. However, most organizations don't have many dedicated workload systems today.
Systems that need high uptime merit a more flexible approach. This is where virtualized servers can be a godsend. With virtualization, jobs can run on a subset of the systems, allowing for a partial shutdown during the move. Here, proper planning can open the downtime window to several days for servers, giving IT teams enough time to properly set up the network and perform on-site testing before restoring the units to service.
While moving servers off premises, remember that the new colocation facility is a shared environment. Create a firewalled zone in the new host's network and add all of the security tools you'll need to protect the new installation. This may mean new switches and routers, which you must set up prior to moving servers.
During the transition, the two virtualized sites will look like two segments of a private cloud. There should be connections between the sites within the network map and appropriate virtual local area network configurations. Plan networking incrementally when the gear is moved, and create and check scripts to build network structures.
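The "create and check scripts" step can be as simple as validating the planned VLAN map before anyone types it into a switch. A minimal sketch, assuming a hypothetical plan of VLAN IDs and subnets spanning the two sites:

```python
import ipaddress

# Hypothetical VLAN plan spanning the on-premises site and the colo segment.
vlan_plan = {
    100: {"name": "prod-onprem", "subnet": "10.10.0.0/24"},
    110: {"name": "prod-colo",   "subnet": "10.10.1.0/24"},
    200: {"name": "mgmt-colo",   "subnet": "10.20.0.0/24"},
}

def check_vlan_plan(plan):
    """Return a list of problems found in the VLAN plan."""
    problems = []
    nets = []
    for vlan_id, cfg in plan.items():
        if not 1 <= vlan_id <= 4094:
            problems.append(f"VLAN {vlan_id}: ID out of 802.1Q range")
        net = ipaddress.ip_network(cfg["subnet"])
        for other_id, other_net in nets:
            if net.overlaps(other_net):
                problems.append(
                    f"VLAN {vlan_id} subnet {net} overlaps "
                    f"VLAN {other_id} ({other_net})"
                )
        nets.append((vlan_id, net))
    return problems

issues = check_vlan_plan(vlan_plan)
print("Plan OK" if not issues else "\n".join(issues))
```

Catching a duplicate VLAN ID or an overlapping subnet in a script beats discovering it during the downtime window.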
Assign a move manager to each site as the go-to person to log and communicate all problems to appropriate admins and to ensure that issues are formally closed out. Since moves are hectic, details can easily be forgotten.
On move-in day
Shut down apps or move them to other virtual machines, and then decommission the servers. This is where automated help really pays off. Use a resource manager to handle more complex moves. At the new colocation facility, turn on resources and add them to the pool with a suitable software package.
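The drain-and-decommission loop a resource manager performs can be sketched with an in-memory stand-in; the class and method names below are illustrative, not the API of any specific product:

```python
class ResourcePool:
    """Toy stand-in for a resource manager tracking hosts and their VMs."""

    def __init__(self):
        self.hosts = {}  # host name -> set of VM names

    def add_host(self, host):
        self.hosts.setdefault(host, set())

    def place_vm(self, host, vm):
        self.hosts[host].add(vm)

    def drain(self, host):
        """Migrate every VM off `host` onto the least-loaded other host."""
        for vm in list(self.hosts[host]):
            target = min(
                (h for h in self.hosts if h != host),
                key=lambda h: len(self.hosts[h]),
            )
            self.hosts[host].remove(vm)
            self.hosts[target].add(vm)

    def decommission(self, host):
        if self.hosts[host]:
            raise RuntimeError(f"{host} still has running VMs")
        del self.hosts[host]

pool = ResourcePool()
for h in ("old-01", "old-02", "colo-01"):
    pool.add_host(h)
for vm in ("app-a", "app-b", "app-c"):
    pool.place_vm("old-01", vm)

pool.drain("old-01")         # VMs spread across the remaining hosts
pool.decommission("old-01")  # now safe to power off and pack the server
print(sorted(pool.hosts))
```

The key property to preserve in any real tool is the guard in `decommission`: a host should refuse to leave the pool while it still carries workloads.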
Once the systems are powered down, it's time to package them for the move. For full-rack moves, keep the internal cabling and fiber-optic connections in place to avoid network cross-connections. It's not as simple as putting shrink-wrap around them, though; cables are heavy and sway easily in transit. Tie or tape them down to avoid cracked connectors on motherboards.
With the cables secure -- and with a backup cable map on hand in case of unplanned events -- make sure each system is locked in place. Even during seamless moves, PCI Express cards can bounce out of their connectors and replaceable drive caddies can spring open. To avoid these issues, use an experienced computer-moving company.
Use an air-cushioned van and make sure the mover avoids any rough, back-country roads when moving servers. Once the mover delivers and installs the units -- whether as racks or as individual servers -- hook up power and cooling systems, as well as switches and routers.
A couple of admins should do independent visual checks to make sure power cabling is right, that nothing is loose and that cooling paths are lined up and clear from obstructions. Next, turn on and check a small group of systems at a time. Again, this is where automated configuration management software can save a lot of time, since beyond simple power-on, the server needs to build into its new virtual cluster.
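The small-group power-on pass can be sketched as a batched health-check loop. The inventory names and the always-healthy probe below are placeholders for real checks (ping, IPMI status, cluster join):

```python
import itertools

# Hypothetical inventory of moved servers and a simulated health probe.
servers = [f"colo-srv-{n:02d}" for n in range(1, 11)]
BATCH_SIZE = 3  # power on and verify a few systems at a time

def health_check(server):
    """Stand-in for a real probe (ping, IPMI status, cluster join)."""
    return True  # this sketch assumes every unit survived the move

def batched(iterable, size):
    """Yield successive lists of up to `size` items."""
    it = iter(iterable)
    while batch := list(itertools.islice(it, size)):
        yield batch

healthy, failed = [], []
for batch in batched(servers, BATCH_SIZE):
    for server in batch:
        (healthy if health_check(server) else failed).append(server)
    # Stop early rather than powering more gear onto a broken fabric.
    if failed:
        break

print(f"{len(healthy)} healthy, {len(failed)} failed")
```

The early stop is the point of working in small groups: a wiring or cooling mistake shows up after three servers, not after forty.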
As the new cluster comes up, consider whether to implement the next phase of the transition. Ideally, you can shift more of the workload to the new facility and then repeat the move process as many times as needed to get all the gear across. This can go much faster if you purchase a new block of servers. New gear can go into the new colocation facility and pass through acceptance testing before the physical move starts. This allows the next phases of the move to overlap, since the new gear provides a buffer for operations to continue.