What are the biggest mistakes in achieving redundant data? How much does IT expertise really affect data center redundancy?
No single redundancy plan fits every company's needs. Radically different business demands and regulatory requirements add complexity to data redundancy decisions. Proper hardware and software choices only pay off when IT professionals possess keen insight into the data and its implications for the business.
One of the biggest threats to data redundancy is unclear business goals combined with a lack of IT expertise. It's easy to acquire and deploy tools, but unless those tools are configured properly and used with business policies and objectives in mind, the value of data redundancy diminishes.
Often, IT teams simply protect everything the same way, which can be detrimental to the business. Not all business data is created equal, so protecting all data equally can be costly and inefficient.
For example, of the 10 applications a business uses, perhaps only five are important enough to replicate remotely, and only two of those may need frequent updates. Replicating all 10 applications frequently uses far more bandwidth and storage than the business truly needs. IT professionals must understand the data being protected and its value to the business in order to protect it appropriately.
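One way to make that tiering concrete is to attach an explicit replication policy to each application. The sketch below is a hypothetical illustration, assuming made-up tier names, application names, and replication intervals; real tiers and intervals would come from the business analysis described above.

```python
# Hypothetical sketch: assign each application a replication tier so that
# only business-critical data consumes remote bandwidth and storage.
# Tier names, intervals and applications are illustrative assumptions.

REPLICATION_POLICY = {
    "critical": {"replicate": True, "interval_minutes": 15},     # frequent updates
    "important": {"replicate": True, "interval_minutes": 1440},  # daily replication
    "low": {"replicate": False, "interval_minutes": None},       # local backup only
}

applications = {
    "order-processing": "critical",
    "payments": "critical",
    "crm": "important",
    "email-archive": "important",
    "hr-portal": "important",
    "intranet-wiki": "low",
    "test-lab": "low",
    "legacy-reports": "low",
    "print-services": "low",
    "dev-sandbox": "low",
}

def replication_plan(apps):
    """Return (app, policy) pairs only for apps that should be replicated."""
    return [(app, REPLICATION_POLICY[tier])
            for app, tier in apps.items()
            if REPLICATION_POLICY[tier]["replicate"]]

for app, policy in replication_plan(applications):
    print(f"{app}: replicate every {policy['interval_minutes']} min")
```

Of the 10 applications, the plan replicates only the five tagged critical or important, and only the two critical ones at the frequent 15-minute interval.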
Other data redundancy considerations
Regulatory compliance and industry guidelines for data storage and retention affect almost every business. IT professionals must ensure compliance for data center redundancy by working with the corporate compliance officer and legal counsel. For example, some redundant data is subject to a mandated retention period and must be destroyed when that period expires.
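Tracking retention expiry can be as simple as comparing a data set's creation date against its mandated retention period. The sketch below is a hypothetical illustration; the data classes and retention periods are assumptions, and real values must come from the compliance officer and legal counsel.

```python
from datetime import date, timedelta

# Hypothetical sketch: flag redundant data whose mandated retention period
# has expired so it can be scheduled for destruction. The retention
# periods below are illustrative assumptions, not legal guidance.

RETENTION = {
    "financial": timedelta(days=7 * 365),  # assumed 7-year retention
    "email": timedelta(days=3 * 365),      # assumed 3-year retention
}

def retention_expired(data_class, created, today=None):
    """True once the retention period for this data class has elapsed."""
    today = today or date.today()
    return today > created + RETENTION[data_class]

# An email archive from early 2015, checked in early 2019, is past its
# assumed 3-year retention; a financial archive of the same age is not.
print(retention_expired("email", date(2015, 1, 1), today=date(2019, 1, 2)))      # True
print(retention_expired("financial", date(2015, 1, 1), today=date(2019, 1, 2)))  # False
```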
Surprisingly few organizations test the recoverability of protected data. Data protection strategies should always include recoverability testing; after all, redundant data is useless if you cannot recover or use it when trouble occurs. Recoverability tests can involve restoring snapshots from the redundant storage array to the main storage array or launching redundant virtual machines on test servers to verify that the workloads are valid. Routinely test as part of your data protection scheme.
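At its simplest, a recoverability test restores a copy of the protected data and verifies that it matches the source. The sketch below is a minimal stand-in, assuming a file copy in place of a real restore job; actual tests would restore array snapshots or boot replicated virtual machines, as described above.

```python
import hashlib
import pathlib
import shutil
import tempfile

# Hypothetical sketch: one basic recoverability check -- restore a copy
# of protected data and confirm its checksum matches the source. The
# file copy below stands in for a real restore job from redundant storage.

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def restore_and_verify(source, restore_dir):
    """Restore `source` into `restore_dir` and verify content integrity."""
    restored = pathlib.Path(restore_dir) / pathlib.Path(source).name
    shutil.copy(source, restored)  # stand-in for the actual restore operation
    return sha256(source) == sha256(restored)

with tempfile.TemporaryDirectory() as backup_dir, \
     tempfile.TemporaryDirectory() as restore_dir:
    protected = pathlib.Path(backup_dir) / "db.bak"
    protected.write_bytes(b"redundant data")
    print(restore_and_verify(protected, restore_dir))  # True
```

The point of the exercise is the verification step: a restore that completes but produces unusable data should fail the test.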
Capacity planning must extend to the remote storage subsystem, with management tools and processes to support it. If IT administrators monitor storage use and growth patterns on the remote storage system, they can upgrade capacity as needs evolve, before the remote array runs short of capacity and causes data protection errors.
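Observed growth rates translate directly into an upgrade deadline. The sketch below is a hypothetical back-of-the-envelope projection; the array size, usage, growth rate, and 90% alert threshold are all assumed figures for illustration.

```python
# Hypothetical sketch: project how many months remain before the remote
# storage array crosses an alert threshold, so capacity can be upgraded
# before data protection jobs start failing. All figures are assumptions.

def months_until_full(capacity_tb, used_tb, growth_tb_per_month, headroom=0.9):
    """Whole months until usage crosses the headroom threshold (default 90%)."""
    budget_tb = capacity_tb * headroom - used_tb
    if budget_tb <= 0:
        return 0  # already past the threshold -- upgrade now
    return int(budget_tb // growth_tb_per_month)

# A 100 TB array at 60 TB used, growing 5 TB/month, hits 90% in 6 months.
print(months_until_full(100, 60, 5))  # 6
```

Plugging monitored values into a projection like this turns capacity planning from a reaction to errors into a scheduled upgrade.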
Related Q&A from Stephen J. Bigelow