How much remote data storage, network and server hardware is enough to enable redundancy between two data centers?
The simplest answer is "it depends": hardware requirements hinge on your ultimate goals for disaster recovery.
Redundant data copies at a remote data center let organizations quickly recover damaged or malfunctioning workloads or even launch workloads on remote systems for business continuity. Successful data redundancy requires a careful consideration of remote storage and networking and server hardware deployment needs, based on the DR strategy.
Remote data storage is usually the most pressing issue for redundancy planning. A remote site requires enough storage to retain all data deemed important enough to be replicated or backed up to a second site. The remote storage deployment depends on what data needs to be stored, how long the data needs to be retained and how much the data can be deduplicated or compressed.
There are several options for protecting important data. Redundancy might mean occasional or frequent virtual machine (VM) snapshots, traditional backup approaches or some mix of strategies and retention times. Storage technologies built into the subsystem -- thin provisioning and data deduplication -- can reduce remote storage capacity requirements.
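To make the sizing factors above concrete, here is a back-of-envelope sketch of remote storage capacity. All figures -- 10 TB protected, a 2% daily change rate, 30-day retention and a 3:1 data-reduction ratio -- are illustrative assumptions, not recommendations; substitute your own measurements.

```python
# Rough remote-storage sizing: a full baseline copy plus daily
# incrementals for the retention window, shrunk by the storage
# subsystem's combined deduplication/compression ratio.

def remote_storage_tb(protected_tb, daily_change_rate,
                      retention_days, reduction_ratio):
    """Estimate remote capacity in TB for one full copy plus
    daily incremental changes kept for retention_days."""
    raw = protected_tb + protected_tb * daily_change_rate * retention_days
    return raw / reduction_ratio

# Illustrative assumptions: 10 TB protected, 2% daily change,
# 30-day retention, 3:1 data reduction.
needed = remote_storage_tb(10.0, 0.02, 30, 3.0)
print(f"Estimated remote capacity: {needed:.1f} TB")  # ~5.3 TB
```

The point of the exercise is that retention time and data reduction dominate the answer: doubling retention nearly doubles raw capacity, while a good deduplication ratio can offset it.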
Other data redundancy concerns
A business must be able to move data copies between sites, so networking and connectivity are also important considerations. Full remote backups can overwhelm network bandwidth if administrators don't manage bandwidth use and reduce the data volume with technologies such as differential or incremental snapshots.
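A quick transfer-time estimate shows why full remote backups can overwhelm a WAN link while incrementals remain manageable. The link speed, data sizes and 70% usable-bandwidth figure below are illustrative assumptions for the sketch, not measured values.

```python
# Rough WAN transfer-time comparison for full vs. incremental copies.

def transfer_hours(data_tb, link_mbps, efficiency=0.7):
    """Hours to move data_tb over a link_mbps WAN link, assuming
    only the stated fraction of nominal bandwidth is usable."""
    bits = data_tb * 8e12                    # terabytes -> bits
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps / 3600

full = transfer_hours(10, 1000)    # 10 TB full backup over 1 Gbps
incr = transfer_hours(0.2, 1000)   # 200 GB incremental, same link
print(f"Full copy: {full:.1f} h; incremental: {incr:.1f} h")
```

Under these assumptions the full copy takes well over a day while the incremental finishes in under an hour, which is why differential or incremental strategies matter for redundancy over shared links.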
In some cases, a comprehensive remote data protection strategy may require networking upgrades or architectural changes at either or both data centers. Data cannot be protected or synchronized when network connectivity is disrupted, so consider the business impact of connectivity problems. Some workloads may simply be too important to entrust to real-time replication over an unreliable link.
If you only intend to protect and store data in a cold redundancy site, there are few -- if any -- server requirements. Servers are necessary if your disaster recovery plan includes operating workloads at the remote data center. For this warm or hot redundancy, servers must be capable of running the applications and offer enough resources to support the protected data. Virtualization abstracts workloads from the underlying server hardware, allowing workloads captured as VM snapshots to run on a wide array of server hardware without traditional hardware duplication at the redundant site.