How much remote data storage, network and server hardware is enough to enable redundancy between two data centers?
The simplest answer is "It depends": hardware requirements hinge on your ultimate goals for disaster recovery.
Redundant data copies at a remote data center let organizations quickly recover damaged or malfunctioning workloads, or even launch workloads on remote systems for business continuity. Successful data redundancy requires careful consideration of remote storage, networking and server hardware deployment needs, based on the DR strategy.
Remote data storage is usually the most pressing issue for redundancy planning. A remote site requires enough storage to retain all data deemed important enough to be replicated or backed up to a second site. The remote storage deployment depends on what data needs to be stored, how long the data needs to be retained and how much the data can be deduplicated or compressed.
There are options for how to protect important data. Redundancy might mean occasional or frequent virtual machine (VM) snapshots, traditional backup approaches or some mix of strategies and retention times. Storage technologies built into the subsystem -- thin provisioning and data deduplication -- will help reduce remote storage system requirements.
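As a rough way to reason about those variables, the capacity needed at the remote site can be estimated from the primary data set, the snapshot retention policy and the expected data reduction ratio. The figures below are hypothetical assumptions for illustration, not vendor guidance.

```python
# Rough remote-storage sizing sketch. All inputs are assumptions
# chosen for illustration; substitute your own measured values.

def remote_storage_tb(primary_tb, snapshots_per_day, retention_days,
                      change_rate, reduction_ratio):
    """Estimate remote capacity: one full baseline copy plus the
    retained snapshot deltas, reduced by dedup/compression."""
    deltas = primary_tb * change_rate * snapshots_per_day * retention_days
    raw = primary_tb + deltas
    return raw / reduction_ratio

# Example: 50 TB of primary data, 4 snapshots per day kept for 30 days,
# 2% changed data per snapshot, and an assumed 3:1 data reduction.
needed = remote_storage_tb(50, 4, 30, 0.02, 3.0)
print(f"Remote capacity needed: about {needed:.1f} TB")
```

Even a modest daily change rate multiplies quickly across a long retention window, which is why dedup and compression ratios matter so much to the remote storage bill.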
A business must be able to move data copies between sites, so networking and connectivity are also important considerations. Full remote backups can overwhelm network bandwidth if administrators don't manage bandwidth use and reduce the data volume with technologies such as differential or incremental snapshots.
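To see why full copies strain the network while incrementals do not, it helps to compare transfer times over the same WAN link. This is a minimal sketch with hypothetical data volumes and an assumed sustained link utilization of 70%.

```python
# Sketch comparing replication transfer times over a WAN link.
# Data volumes and utilization are illustrative assumptions.

def transfer_hours(data_gb, link_mbps, utilization=0.7):
    """Hours to move data_gb over a link at a sustained
    fraction of its nominal megabit-per-second rate."""
    megabits = data_gb * 8 * 1000          # GB -> megabits
    effective_mbps = link_mbps * utilization
    return megabits / effective_mbps / 3600

# A 10 TB full backup vs. a 200 GB incremental over a 1 Gbps link.
full_copy = transfer_hours(10_000, 1000)   # roughly 32 hours
incremental = transfer_hours(200, 1000)    # well under an hour
print(f"Full: {full_copy:.1f} h, incremental: {incremental:.2f} h")
```

A full copy that takes more than a day to replicate cannot meet a daily protection window, which is the practical argument for differential or incremental transfers.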
In some cases, a comprehensive remote data protection strategy may require networking upgrades or architectural changes at either or both data centers. Data cannot be protected or synchronized when network connectivity is disrupted, so consider the business impact of connectivity problems. Some workloads may simply be too important to depend on real-time replication alone.
If you only intend to protect and store data in a cold redundancy center, there are few -- if any -- server requirements. Servers are necessary if your disaster recovery plan includes operating workloads at the remote data center. For this warm or hot redundancy, servers should be capable of running applications and offer enough resources to support the protected data. Virtualization abstracts workloads from the underlying server hardware, allowing workloads restored from VM snapshots to run on an array of server hardware without duplicating the primary site's hardware at the redundant site.
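For a warm or hot site, the server question reduces to whether the remote hosts can supply enough compute and memory for the protected VMs. A simple back-of-the-envelope check, using hypothetical VM and host specifications and an assumed vCPU-to-core overcommit ratio, might look like this:

```python
import math

# Minimum host count for a warm/hot redundancy site.
# VM and host specs below are illustrative assumptions.

def warm_site_hosts(vm_count, vcpus_per_vm, ram_gb_per_vm,
                    host_cores, host_ram_gb, vcpus_per_core=4):
    """Take the larger of the CPU-bound and RAM-bound host counts,
    since whichever resource runs out first sets the floor."""
    cpu_hosts = math.ceil(vm_count * vcpus_per_vm
                          / (host_cores * vcpus_per_core))
    ram_hosts = math.ceil(vm_count * ram_gb_per_vm / host_ram_gb)
    return max(cpu_hosts, ram_hosts)

# Example: 120 protected VMs at 2 vCPU / 8 GB each, running on
# hosts with 32 cores and 512 GB of RAM at 4:1 vCPU overcommit.
hosts = warm_site_hosts(120, 2, 8, 32, 512)
print(f"Minimum hosts at the redundant site: {hosts}")
```

The point of the exercise is that a redundant site rarely needs a one-for-one copy of production hardware; it only needs enough aggregate capacity to carry the workloads you have decided must fail over.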
Related Q&A from Stephen J. Bigelow