What are the biggest mistakes in achieving redundant data? How much does IT expertise really affect data center redundancy?
No single redundancy plan fits every company's needs. Radically different business demands and regulatory requirements add complexity to data redundancy decisions. Proper hardware and software choices only pay off when IT professionals possess keen insight into the data and its implications for the business.
The biggest threats to data redundancy are unclear business goals and a lack of IT expertise. It's easy to acquire and deploy tools, but unless those tools are configured properly and used with business policies and objectives in mind, the value of data redundancy diminishes.
Often, IT teams simply protect everything the same way, which can be detrimental to the business. Not all business data is created equal, so protecting all data equally can be costly and inefficient.
For example, of the 10 applications a business runs, perhaps only five are important enough to replicate remotely, and only two of those need frequent updates. Replicating all 10 applications frequently will use far more bandwidth and storage than the business truly needs. IT professionals must understand the data being protected and its value to the business in order to protect it appropriately.
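The tiering idea above can be sketched in code. This is a minimal illustration, not a real product's policy engine; the application names, tier flags, and replication intervals are all hypothetical placeholders for classifications your business would define.

```python
from dataclasses import dataclass

# Hypothetical policy model: the flags and intervals below stand in for
# whatever classification scheme the business actually uses.

@dataclass
class App:
    name: str
    business_critical: bool        # worth an off-site replica at all?
    needs_frequent_updates: bool   # changes often enough to replicate hourly?

def replication_interval(app: App) -> str:
    """Map an application's business value to a replication schedule."""
    if not app.business_critical:
        return "none"    # local backup only; no remote replica
    if app.needs_frequent_updates:
        return "hourly"  # near-continuous protection
    return "daily"       # remote replica at a relaxed cadence

apps = [
    App("order-processing", True, True),
    App("payroll", True, False),
    App("intranet-wiki", False, False),
]
for app in apps:
    print(app.name, "->", replication_interval(app))
```

Only the critical, fast-changing applications get the expensive hourly schedule; everything else consumes less bandwidth and remote storage.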
Other data redundancy considerations
Regulatory compliance and industry guidelines for data storage and retention affect almost every business. IT professionals must ensure compliance for data center redundancy by working with the corporate compliance officer and legal counsel. For example, some redundant data is subject to a requisite retention period and must be destroyed when that time expires.
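A retention rule like the one above is straightforward to encode. This is a sketch only: the seven-year period is an arbitrary placeholder, and the actual retention and destruction requirements must come from your compliance officer and legal counsel.

```python
from datetime import date, timedelta

# Placeholder retention period -- the real value is set by regulation
# and legal counsel, not by IT.
RETENTION = timedelta(days=7 * 365)

def is_expired(created: date, today: date) -> bool:
    """True when a redundant copy has outlived its required retention
    period and is due for secure destruction."""
    return today - created > RETENTION

# A copy created in 2010 has expired by 2020; one from last year has not.
print(is_expired(date(2010, 1, 1), date(2020, 1, 1)))  # True
print(is_expired(date(2019, 1, 1), date(2020, 1, 1)))  # False
```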
Surprisingly few organizations test the recoverability of protected data. Data protection strategies should always include recoverability testing; after all, redundant data is useless if you cannot recover or use it when trouble occurs. Recoverability tests can involve restoring snapshots from the redundant storage array to the main storage array or launching redundant virtual machines on test servers to verify that the workloads are valid. Routinely test as part of your data protection scheme.
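One simple form of recoverability testing is verifying that a restored copy is byte-for-byte identical to the source. The sketch below assumes file-level backups and uses checksums for the comparison; real tests should also launch the restored workload, as described above.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large backup images fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A restore only counts if the recovered data matches the source."""
    return sha256(original) == sha256(restored)
```

Running a check like this on a schedule catches silently corrupted replicas before a real disaster does.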
Capacity planning must extend to the remote storage subsystem, with management tools and processes to support it. If IT administrators monitor storage use and growth patterns on the remote storage system, they can upgrade capacity as needs evolve, before the remote storage array runs short of storage capacity and causes data protection errors.
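The monitoring described above feeds a simple projection: given current usage and the observed growth rate, estimate how long until the remote array fills. This linear model is a sketch; real growth is rarely linear, and the figures below are illustrative.

```python
def months_until_full(used_gb: float, capacity_gb: float,
                      growth_gb_per_month: float) -> float:
    """Linear projection of months of headroom left on the remote
    storage array at the current growth rate."""
    if growth_gb_per_month <= 0:
        return float("inf")  # flat or shrinking usage: no deadline
    return (capacity_gb - used_gb) / growth_gb_per_month

# Example: 40 TB used of 50 TB, growing 500 GB/month -> 20 months left
print(months_until_full(40_000, 50_000, 500))
```

An alert threshold on this number (say, fewer than six months of headroom) gives administrators time to order and install capacity before replication jobs start failing.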
Related Q&A from Stephen J. Bigelow