What are the biggest mistakes in achieving redundant data? How much does IT expertise really affect data center...
No single redundancy plan fits every company's needs. Radically different business demands and regulatory requirements add complexity to data redundancy decisions. Proper hardware and software choices only pay off when IT professionals possess keen insight into the data and its implications for the business.
Among the biggest threats to data redundancy are unclear business goals and a lack of IT expertise. It's easy to acquire and deploy tools, but unless those tools are configured properly and used with business policies and objectives in mind, the value of data redundancy diminishes.
Often, IT teams simply protect everything the same way, which can be detrimental to the business. Not all business data is created equal, so protecting all data equally can be costly and inefficient.
For example, of the 10 applications a business uses, perhaps only five are important enough to replicate remotely, and only two of those may need frequent updates. Replicating all 10 applications frequently will consume far more bandwidth and storage than the business truly needs. IT professionals must understand the data being protected and its value to the business in order to protect it appropriately.
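One way to express this kind of tiering is as an explicit replication policy. The sketch below is purely illustrative -- the application names, tiers and intervals are assumptions, not a real inventory -- but it mirrors the 10-application example above: five apps replicate remotely, and only two of those do so frequently.

```python
# Illustrative protection tiers. Which tier an app belongs to -- and what each
# tier requires -- is a business decision, not a tooling default.
REPLICATION_POLICY = {
    # tier: (replicate offsite?, replication interval in minutes)
    "critical": (True, 15),     # near-continuous remote updates
    "important": (True, 1440),  # one remote update per day is enough
    "low": (False, None),       # local backup only; no remote replication
}

# Hypothetical application inventory: 10 apps, as in the example above.
APP_TIERS = {
    "orders": "critical", "payments": "critical",
    "crm": "important", "hr": "important", "erp-reporting": "important",
    "wiki": "low", "dev-sandbox": "low", "test-env": "low",
    "marketing-site": "low", "archive": "low",
}

def replication_plan(app_tiers, policy):
    """Return only the apps that actually need remote replication,
    mapped to their replication interval in minutes."""
    plan = {}
    for app, tier in app_tiers.items():
        replicate, interval = policy[tier]
        if replicate:
            plan[app] = interval
    return plan

plan = replication_plan(APP_TIERS, REPLICATION_POLICY)
# Five of the ten apps replicate remotely; only two at the 15-minute interval.
```

The point of writing the policy down this way is that bandwidth and storage consumption now follow from deliberate business decisions rather than from a blanket "protect everything the same way" default.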
Other data redundancy considerations
Regulatory compliance and industry guidelines for data storage and retention affect almost every business. IT professionals must ensure compliance for data center redundancy by working with the corporate compliance officer and legal counsel. For example, some redundant data is subject to a requisite retention period and must be destroyed when that time expires.
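Retention rules like this are straightforward to automate once compliance and legal have specified the period. A minimal sketch, assuming a hypothetical seven-year retention period (the actual period must come from your compliance officer and legal counsel):

```python
from datetime import date, timedelta

# Assumed retention period -- illustrative only; real periods are set by
# regulation and corporate policy, not by IT.
RETENTION = timedelta(days=7 * 365)

def copies_to_destroy(copies, today):
    """copies: list of (copy_id, creation_date) tuples.
    Returns the ids of redundant copies whose retention period has expired
    and which must therefore be destroyed."""
    return [cid for cid, created in copies if today - created > RETENTION]

copies = [
    ("backup-2015-01", date(2015, 1, 31)),  # past retention -> destroy
    ("backup-2024-06", date(2024, 6, 30)),  # still within retention
]
expired = copies_to_destroy(copies, today=date(2025, 1, 1))
```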
Surprisingly few organizations test the recoverability of protected data. Data protection strategies should always include recoverability testing; after all, redundant data is useless if you cannot recover or use it when trouble occurs. Recoverability tests can involve restoring snapshots from the redundant storage array to the main storage array or launching redundant virtual machines on test servers to verify that the workloads are valid. Routinely test as part of your data protection scheme.
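A recoverability check can be as simple as restoring each protected item and comparing it against a checksum recorded at protection time. The sketch below uses in-memory dictionaries as stand-ins for the primary and redundant storage arrays; the item names and data are illustrative assumptions.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Primary data, with checksums recorded when the data was protected.
primary = {"db-snapshot": b"customer records v42"}
recorded = {name: sha256(data) for name, data in primary.items()}

# The redundant copy we want to validate (here, an exact replica).
redundant = dict(primary)

def verify_recoverability(redundant_store, recorded_checksums):
    """Restore each item from the redundant store and verify its checksum.
    Returns the names of items that fail to restore correctly."""
    failures = []
    for name, expected in recorded_checksums.items():
        restored = redundant_store.get(name)
        if restored is None or sha256(restored) != expected:
            failures.append(name)
    return failures

failures = verify_recoverability(redundant, recorded)
# An empty failures list means every item restored with the expected content.
```

In practice the "restore" step would pull snapshots back from the redundant array or boot redundant VMs on test servers, as described above; the principle is the same: prove the data is usable before you need it.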
Capacity planning must extend to the remote storage subsystem, with management tools and processes to support it. If IT administrators monitor storage use and growth patterns on the remote storage system, they can upgrade capacity as needs evolve, before the remote storage array runs short of storage capacity and causes data protection errors.
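A simple way to act on those growth patterns is to project when the remote array will fill up and alert well before that date. This sketch assumes roughly linear growth and uses made-up daily usage readings and thresholds:

```python
def days_until_full(samples_gb, capacity_gb):
    """samples_gb: daily used-capacity readings (GB), oldest first.
    Projects days until the array is full, assuming linear growth
    between the first and last sample."""
    if len(samples_gb) < 2:
        return None  # not enough data to estimate a trend
    growth_per_day = (samples_gb[-1] - samples_gb[0]) / (len(samples_gb) - 1)
    if growth_per_day <= 0:
        return float("inf")  # flat or shrinking usage
    return (capacity_gb - samples_gb[-1]) / growth_per_day

# One week of illustrative readings: ~50 GB/day growth on a 16 TB array.
samples = [12000, 12050, 12110, 12150, 12210, 12260, 12300]
days = days_until_full(samples, capacity_gb=16000)
needs_upgrade = days is not None and days < 90  # alert inside a 90-day window
```

Real monitoring tools expose this kind of trending, but the underlying idea is the same: upgrade capacity while there is still comfortable lead time, not after replication jobs start failing for lack of space.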