
Data center management for geographically split data centers

By Robert Crawford

Business and regulatory requirements constantly push the limits of data center management and recovery. Twenty years ago, trucking tapes to a remote center for volume restores was good enough. Ten years ago, two data centers within synchronous I/O distance fit the bill. Now, with e-commerce being the primary workload driver, a good recovery plan involves data centers split over geographic distances with little or no recovery time.

The geographically split concept and data center management
Conceptually, the geographically split idea starts with two data centers, as shown below (Figure 1). Note that this concept may be extended to any number of sites.

Figure 1: Diagram of geographically split data centers.

In Figure 1, two data centers are separated by a distance too great for synchronous disk I/O. The distance drives several requirements. First, each data center must have its own Direct Access Storage Device (DASD) farm to manage. Second, synchronous hardware replication will not work because of network latency. Lastly, the distance also means the logical partitions (LPARs) in each data center can’t be in the same Sysplex.

The network cloud plays an important role in data center management, acting as a switch between the two data centers. With the proper internal plumbing, incoming requests can be routed to either site based on diverse criteria. In fact, with today’s browser-based applications, a user or customer may be switched between the data centers without interruption.
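To make the idea concrete, here is a minimal sketch of how a front end might choose between two sites. The Site class, the health and load fields, and the pick_site function are all hypothetical; a real shop would use a global load balancer or DNS-based traffic manager rather than application code.

```python
# Minimal sketch of site selection for a two-data-center front end.
# All names here (Site, pick_site, the health/load fields) are invented
# for illustration.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    healthy: bool      # result of a recent health probe
    load: float        # 0.0 (idle) to 1.0 (saturated)

def pick_site(sites: list[Site], max_load: float = 0.9) -> Site:
    """Route to the least-loaded healthy site; fail if none qualify."""
    candidates = [s for s in sites if s.healthy and s.load < max_load]
    if not candidates:
        raise RuntimeError("no healthy data center available")
    return min(candidates, key=lambda s: s.load)

if __name__ == "__main__":
    sites = [Site("Data center A", healthy=True, load=0.72),
             Site("Data center B", healthy=True, load=0.35)]
    print(pick_site(sites).name)   # -> Data center B
```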

Because hardware replication is unavailable, data must be captured and applied at the logical database or access method level. Several products are available to help with this task. Some detect changes by reading database or Virtual Storage Access Method (VSAM) logs. Any interesting changes are hurled to the other data center over communication links using various transport protocols. At the receiving end, another piece of software issues the database or access method command to complete the remote update.
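As a toy illustration of the capture-and-apply flow, the sketch below filters a change log, ships the interesting records over a stand-in transport, and replays them at the far end. The log record format, the queue transport and the apply logic are all invented; real products read actual DBMS or VSAM logs and use their own wire protocols.

```python
# Toy capture-and-apply replication loop. The log records, the queue
# transport and the apply step are simplified stand-ins for what real
# replication products do against DBMS or VSAM logs.

import json
import queue

link = queue.Queue()          # stands in for the communication link

def capture(log_records, interesting_tables):
    """Capture side: filter log records and ship interesting changes."""
    for rec in log_records:
        if rec["table"] in interesting_tables:
            link.put(json.dumps(rec))     # serialize for the wire

def apply_changes(database):
    """Receive side: replay each change as a local database update."""
    while not link.empty():
        rec = json.loads(link.get())
        if rec["op"] == "UPDATE":
            database.setdefault(rec["table"], {})[rec["key"]] = rec["value"]
        elif rec["op"] == "DELETE":
            database.get(rec["table"], {}).pop(rec["key"], None)

if __name__ == "__main__":
    log = [{"table": "ACCT", "op": "UPDATE", "key": "1001", "value": "500.00"},
           {"table": "TEMP", "op": "UPDATE", "key": "x", "value": "ignored"}]
    remote_db = {}
    capture(log, interesting_tables={"ACCT"})
    apply_changes(remote_db)
    print(remote_db)   # {'ACCT': {'1001': '500.00'}}
```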

Configurations for geographically split data centers
Split data centers may be configured in several ways. The ones that come to mind are:

Hot-warm
In the hot-warm configuration, one data center is designated as the target of all network traffic. Updates in the primary data center are replicated to the secondary site, which receives and applies the changes to its local DASD farm. In the event of a primary data center failure, the secondary site comes online with minimum fuss.

Update-inquiry
In the update-inquiry scenario, one data center fields all updates while the other only allows inquiries. The update site sends changes to the read-only Sysplex in a timely fashion. If the update data center fails, the inquiry Sysplex assumes full responsibilities.

The network is crucial to this setup, as it must be able to inspect message content to distinguish inquiry transactions from update transactions. The shop may also use the network for workload balancing so that each data center carries its share of the read-only traffic.
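A rough sketch of that content inspection follows. The transaction codes and site names are invented for illustration; in practice the classification would live in network gear or a routing layer, not application code.

```python
# Sketch of content-based routing for the update-inquiry configuration.
# The transaction codes and site names are invented for illustration.

UPDATE_TXNS = {"DEPO", "WTHD", "XFER"}   # hypothetical update transactions
INQUIRY_TXNS = {"BALQ", "HIST"}          # hypothetical inquiry transactions

def route(txn_code: str,
          inquiry_sites=("Data center A", "Data center B"),
          update_site="Data center A") -> str:
    """Send updates to the update site; spread inquiries across both."""
    if txn_code in UPDATE_TXNS:
        return update_site
    if txn_code in INQUIRY_TXNS:
        # naive deterministic balancing across the read-capable sites
        return inquiry_sites[sum(map(ord, txn_code)) % len(inquiry_sites)]
    raise ValueError(f"unknown transaction code: {txn_code}")

if __name__ == "__main__":
    for code in ("BALQ", "XFER", "HIST"):
        print(code, "->", route(code))
```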

Update-update
This is the real deal. Each data center supports full updates to all data. Two-way replication flows over the communication links to keep the databases in sync. In the event of a failure, the surviving data center takes on all incoming traffic.

Note that while both data centers accept updates, the data may be logically split. For example, the primary databases for customers living west of the Mississippi may be in "Data center A" with secondary, read-only copies at "Data center B." For customers residing elsewhere, the arrangement is reversed. Ultimately, this means the network must be smart enough to know where a customer’s primary data resides.
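A minimal sketch of that lookup appears below, using a simple state-to-site table in the spirit of the Mississippi example. The state list, site names and routing functions are hypothetical.

```python
# Sketch of an update-update routing table: each customer's updates go to
# the site holding their primary data; the other site holds a read-only
# copy. The state list and site names are illustrative only.

WEST_OF_MISSISSIPPI = {"CA", "OR", "WA", "NV", "AZ", "TX", "CO", "MN"}  # abridged

def primary_site(customer_state: str) -> str:
    """Return the data center holding this customer's primary databases."""
    return "Data center A" if customer_state in WEST_OF_MISSISSIPPI else "Data center B"

def route_request(customer_state: str, is_update: bool) -> str:
    """Updates must go to the primary; inquiries may read the secondary."""
    if is_update:
        return primary_site(customer_state)
    # read-only work can go to either copy; prefer the secondary to spread load
    return "Data center B" if primary_site(customer_state) == "Data center A" else "Data center A"

if __name__ == "__main__":
    print(route_request("CA", is_update=True))    # Data center A (primary)
    print(route_request("NY", is_update=False))   # Data center A (secondary copy)
```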

Philosophical questions
You more thoughtful readers probably already have the willies thinking about this. Here’s some more food for thought to add to your discomfort:

Detecting and acting upon perceived failures requires carefully crafted policies, mountains of automation and careful data center management. The good news is that as geographically split data centers become the norm, the policies for handling these issues should become easier to express as sets of rules rather than custom code.
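As a rough illustration of "rules instead of code," the sketch below evaluates failure-handling policy from a declarative rule table that operations staff could edit without touching the engine. The metric names, thresholds and actions are all invented.

```python
# Sketch of a declarative failover policy: the conditions and actions live
# in an editable rule table, while a small generic engine evaluates them.
# All metric names, thresholds and actions here are invented.

RULES = [
    # (description, condition, action)
    ("site unreachable",
     lambda m: m["missed_heartbeats"] >= 3,
     "redirect all traffic to surviving site"),
    ("replication lagging",
     lambda m: m["replication_lag_sec"] > 60,
     "page on-call and suspend workload balancing"),
    ("degraded but alive",
     lambda m: m["error_rate"] > 0.05,
     "shift read-only traffic away from site"),
]

def evaluate(metrics: dict) -> list[str]:
    """Return the actions whose conditions match the current metrics."""
    return [action for desc, cond, action in RULES if cond(metrics)]

if __name__ == "__main__":
    sample = {"missed_heartbeats": 4, "replication_lag_sec": 12, "error_rate": 0.01}
    for action in evaluate(sample):
        print(action)   # -> redirect all traffic to surviving site
```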

ABOUT THE AUTHOR: For over 25 years, Robert Crawford has worked off and on as a CICS systems programmer. He is experienced in debugging and tuning applications and has written in COBOL, Assembler and C++ using VSAM, DLI and DB2.

31 Mar 2011
