How much remote data storage, network and server hardware is enough to enable redundancy between two data centers?
The simplest answer is "it depends": hardware requirements hinge on your ultimate goals for disaster recovery.
Redundant data copies at a remote data center let organizations quickly recover damaged or malfunctioning workloads, or even launch workloads on remote systems for business continuity. Successful data redundancy requires careful consideration of remote storage, networking and server hardware deployment needs, based on the DR strategy.
Remote data storage is usually the most pressing issue for redundancy planning. A remote site requires enough storage to retain all data deemed important enough to be replicated or backed up to a second site. The remote storage deployment depends on what data needs to be stored, how long the data needs to be stored and how much the data can be deduplicated or compressed.
There are several options for how to protect important data. Redundancy might mean occasional or frequent virtual machine (VM) snapshots, traditional backup approaches or some mix of strategies and retention times. Storage technologies built into the subsystem -- thin provisioning and data deduplication -- can help reduce remote storage system requirements.
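The storage sizing considerations above can be sketched as a back-of-the-envelope estimate: one full baseline copy plus incrementals held for a retention window, shrunk by a combined deduplication/compression ratio. All the figures below are illustrative assumptions, not vendor guidance.

```python
# Rough remote-capacity sizing sketch. The change rate, retention window and
# data reduction ratio are hypothetical placeholders -- measure your own.

def remote_storage_tb(protected_tb, daily_change, retention_days, reduction):
    """Estimate physical remote capacity needed, in TB."""
    baseline = protected_tb                           # one full copy
    incrementals = protected_tb * daily_change * retention_days
    return (baseline + incrementals) / reduction      # after dedupe/compression

# 100 TB protected, 2% daily change, 30-day retention, 3:1 data reduction
print(f"{remote_storage_tb(100, 0.02, 30, 3.0):.1f} TB")  # -> 53.3 TB
```

Even a crude model like this makes the trade-offs visible: longer retention grows the incremental tail linearly, while a better reduction ratio divides the whole requirement.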
A business must be able to move data copies between sites, so networking and connectivity are also important considerations. Full remote backups can overwhelm network bandwidth if administrators don't manage bandwidth use and reduce the data volume with technologies such as differential or incremental snapshots.
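A quick transfer-time calculation shows why full remote backups can swamp a WAN link while incrementals stay manageable. The link speed, utilization efficiency and data sizes below are illustrative assumptions.

```python
# Back-of-the-envelope WAN transfer times (decimal units: 1 GB = 8,000 Mb).
# The 70% efficiency factor is a hypothetical allowance for protocol
# overhead and competing traffic.

def transfer_hours(data_gb, link_mbps, efficiency=0.7):
    """Hours to move data_gb over a link_mbps WAN at the given utilization."""
    megabits = data_gb * 8 * 1000
    seconds = megabits / (link_mbps * efficiency)
    return seconds / 3600

full = transfer_hours(10_000, 1000)   # 10 TB full backup over a 1 Gbps link
incr = transfer_hours(200, 1000)      # 200 GB incremental over the same link
print(f"full: {full:.1f} h, incremental: {incr:.1f} h")  # full: 31.7 h, incremental: 0.6 h
```

A day-plus transfer window for a full backup is why differential and incremental techniques, plus bandwidth management, matter so much for remote protection.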
In some cases, a comprehensive remote data protection strategy may require networking upgrades or architectural changes at either or both data centers. Data cannot be protected or synchronized when network connectivity is disrupted, so consider the business impact of connectivity problems. Some workloads may simply be too important to risk in real-time replication.
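The business impact of a connectivity disruption can be estimated as the data written at the primary site that has not yet reached the remote site. The change rate and outage length below are hypothetical examples.

```python
# Rough exposure estimate during a WAN outage: changes accumulate locally
# and are lost if the primary site fails before the link recovers.

def unreplicated_gb(change_gb_per_hour, outage_hours):
    """Data written locally but not yet copied off-site during the outage."""
    return change_gb_per_hour * outage_hours

# 50 GB/hour of changes, 4-hour connectivity outage
print(unreplicated_gb(50, 4))  # -> 200 GB exposed
```

Multiplying a workload's change rate by a plausible outage duration is a simple way to decide which workloads justify stronger protection.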
If you only intend to protect and store data in a cold redundancy center, there are few -- if any -- server requirements. Servers are necessary if your disaster recovery plan includes operating workloads at the remote data center. For this warm or hot redundancy, servers should be capable of running the applications and offer enough resources to support the protected data. Virtualization abstracts workloads from the underlying server hardware, so VMs restored from snapshots can run on a broad range of server hardware without duplicating the primary site's exact hardware at the redundant site.
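Sizing servers for a warm or hot site amounts to aggregating the resources of the VMs you intend to fail over, plus headroom. The VM inventory and 25% headroom figure below are hypothetical placeholders for illustration.

```python
# Sketch of warm/hot-site server sizing: sum protected-VM resources and add
# headroom for post-failover load spikes. Inventory values are made up.
import math

protected_vms = [
    {"name": "db01",  "vcpu": 8, "ram_gb": 64},
    {"name": "app01", "vcpu": 4, "ram_gb": 16},
    {"name": "web01", "vcpu": 2, "ram_gb": 8},
]
headroom = 1.25  # 25% spare capacity

need_vcpu = math.ceil(sum(vm["vcpu"] for vm in protected_vms) * headroom)
need_ram = math.ceil(sum(vm["ram_gb"] for vm in protected_vms) * headroom)
print(f"Remote site needs roughly {need_vcpu} vCPUs and {need_ram} GB RAM")
```

Because virtualization decouples the VMs from specific hardware, the remote servers only need to satisfy these aggregate totals, not mirror the primary site model for model.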