How much remote data storage, network and server hardware is enough to enable redundancy between two data centers?
The simplest answer is "it depends": hardware requirements hinge on your ultimate disaster recovery goals.
Redundant data copies at a remote data center let organizations quickly recover damaged or malfunctioning workloads, or even launch workloads on remote systems for business continuity. Successful data redundancy requires careful consideration of remote storage, networking and server hardware needs, based on the DR strategy.
Remote data storage is usually the most pressing issue for redundancy planning. A remote site requires enough storage to retain all data deemed important enough to be replicated or backed up to a second site. The remote storage deployment depends on what data needs to be stored, how long the data needs to be retained and how much the data can be reduced through deduplication or compression.
Organizations have several options for protecting important data. Redundancy might mean occasional or frequent virtual machine (VM) snapshots, traditional backup approaches or some mix of strategies and retention times. Storage technologies built into the subsystem -- thin provisioning and data deduplication -- can help reduce remote storage system requirements.
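The capacity planning described above can be sketched as simple arithmetic. This is a rough sizing sketch, not a definitive method: the deduplication and compression ratios, change rate and retention window are all illustrative assumptions you would replace with measured values from your own environment.

```python
def remote_storage_needed_gb(protected_gb, daily_change_gb, retention_days,
                             dedupe_ratio=3.0, compression_ratio=1.5):
    """Estimate remote storage capacity for a backup retention window.

    dedupe_ratio and compression_ratio are illustrative assumptions;
    measure real reduction rates on your own data before sizing hardware.
    """
    # One full copy plus the daily changed data kept for the retention window
    raw_gb = protected_gb + daily_change_gb * retention_days
    # Deduplication and compression shrink what actually lands on disk
    return raw_gb / (dedupe_ratio * compression_ratio)

# Example: 10 TB protected, 200 GB daily change, 30-day retention
print(round(remote_storage_needed_gb(10_000, 200, 30), 1))  # → 3555.6
```

Even with optimistic reduction ratios, retention time dominates the result, which is why retention policy decisions belong in the hardware sizing conversation.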
Other data redundancy concerns
A business must be able to move data copies between sites, so networking and connectivity are also important considerations. Full remote backups can overwhelm network bandwidth if administrators don't manage bandwidth use and reduce the data volume with technologies such as differential or incremental snapshots.
In some cases, a comprehensive remote data protection strategy may require networking upgrades or architectural changes at either or both data centers. Data cannot be protected or synchronized when network connectivity is disrupted, so consider the business impact of connectivity problems. Some workloads may simply be too important to risk in real-time replication.
If you only intend to protect and store data in a cold redundancy site, there are few -- if any -- server requirements. Servers are necessary if your disaster recovery plan includes operating workloads at the remote data center. For this warm or hot redundancy, servers should be capable of running the applications and offer enough resources to support the protected data. Virtualization abstracts workloads from the underlying server hardware, allowing workloads restored from VM snapshots to run on a range of server hardware without duplicating the primary site's hardware exactly.
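For warm or hot redundancy, a first-pass check is whether the remote hosts can absorb the protected VMs. A rough sketch, where the host sizes and headroom factor are illustrative assumptions rather than recommendations:

```python
import math

def hosts_needed(vm_specs, host_vcpu=32, host_ram_gb=256, headroom=0.8):
    """Rough count of remote hosts needed to run protected VMs.

    vm_specs: list of (vcpu, ram_gb) per VM. Host capacities and the
    headroom factor are hypothetical; substitute your actual hardware.
    """
    total_vcpu = sum(vcpu for vcpu, _ in vm_specs)
    total_ram = sum(ram for _, ram in vm_specs)
    # Size against whichever resource runs out first
    by_cpu = math.ceil(total_vcpu / (host_vcpu * headroom))
    by_ram = math.ceil(total_ram / (host_ram_gb * headroom))
    return max(by_cpu, by_ram)

# Example: 20 VMs, each with 4 vCPUs and 16 GB of RAM
print(hosts_needed([(4, 16)] * 20))  # → 4
```

Because virtualization decouples the workloads from specific hardware, the remote hosts only need to satisfy this aggregate capacity check, not mirror the primary site model-for-model.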