What are the biggest mistakes in achieving redundant data? How much does IT expertise really affect data center...
No single redundancy plan fits every company's needs. Radically different business demands and regulatory requirements add complexity to data redundancy decisions. Proper hardware and software choices only pay off when IT professionals possess keen insight into the data and its implications for the business.
Two of the biggest threats to data redundancy are unclear business goals and a lack of IT expertise. It's easy to acquire and deploy tools, but unless those tools are configured properly and used with business policies and objectives in mind, the value of data redundancy diminishes.
Often, IT teams simply protect everything the same way, which can be detrimental to the business. Not all business data is created equal, so protecting all data equally can be costly and inefficient.
For example, of the 10 applications a business uses, only five might be important enough to replicate remotely, and only two of those might need frequent replication updates. Replicating all 10 applications frequently would consume far more bandwidth and storage than the business truly needs. IT professionals must understand the data being protected and its value to the business to protect it appropriately.
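The tiering idea above can be sketched in code. This is a minimal illustration, not a real tool: the application names, tier assignments, and daily change rates are all hypothetical assumptions chosen to mirror the 10-app example.

```python
# Hypothetical sketch: tier applications so only business-critical data
# is replicated remotely and frequently. Names, tiers, and change rates
# are illustrative assumptions, not values from any real environment.

from dataclasses import dataclass

@dataclass
class App:
    name: str
    tier: int              # 1 = replicate hourly, 2 = replicate daily, 3 = local backup only
    daily_change_gb: float # data changed per day

APPS = [
    App("orders", 1, 40.0), App("billing", 1, 25.0),
    App("crm", 2, 15.0), App("intranet", 3, 5.0), App("wiki", 3, 2.0),
]

SCHEDULES = {1: "hourly", 2: "daily", 3: "local-backup-only"}

def replication_plan(apps):
    """Map each app to a replication schedule based on its business tier."""
    return {app.name: SCHEDULES[app.tier] for app in apps}

def remote_gb_per_day(apps):
    """Daily change volume shipped off-site under the tiered plan."""
    return sum(a.daily_change_gb for a in apps if a.tier in (1, 2))
```

With a plan like this, only tier 1 and tier 2 data travels to the remote site, so the off-site bandwidth and storage bill tracks the data's business value rather than its raw volume.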
Other data redundancy considerations
Regulatory compliance and industry guidelines for data storage and retention affect almost every business. IT professionals must ensure compliance for data center redundancy by working with the corporate compliance officer and legal counsel. For example, some redundant data is subject to a requisite retention period and must be destroyed when that time expires.
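A retention check like the one described can be sketched as a simple policy lookup. The record types and retention periods below are illustrative assumptions; real values must come from your compliance officer and legal counsel, not from code.

```python
# Hypothetical sketch: flag redundant copies whose requisite retention
# period has elapsed so they can be destroyed per policy. The record
# types and periods here are placeholders, not real regulatory values.

from datetime import date, timedelta

RETENTION = {
    "financial": timedelta(days=7 * 365),  # assumed 7-year retention
    "email": timedelta(days=3 * 365),      # assumed 3-year retention
}

def retention_expired(record_type: str, created: date, today: date) -> bool:
    """True when the copy's retention period has elapsed and it is due for destruction."""
    return today - created >= RETENTION[record_type]
```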
Surprisingly few organizations test the recoverability of protected data. Data protection strategies should always include recoverability testing; after all, redundant data is useless if you cannot recover or use it when trouble occurs. Recoverability tests can involve restoring snapshots from the redundant storage array to the main storage array or launching redundant virtual machines on test servers to verify that the workloads are valid. Routinely test as part of your data protection scheme.
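The core of any recoverability test is verifying that what comes back matches what went in. A minimal sketch, assuming your backup tooling can hand you the source and restored bytes, is a checksum comparison:

```python
# Hypothetical sketch: verify a restore by comparing checksums of the
# source data and the recovered copy. Fetching those bytes is left to
# whatever snapshot/restore tooling your environment actually uses.

import hashlib

def checksum(data: bytes) -> str:
    """Content fingerprint used to compare source and restored copies."""
    return hashlib.sha256(data).hexdigest()

def verify_restore(source: bytes, restored: bytes) -> bool:
    """A restore only counts as successful if the recovered bytes match."""
    return checksum(source) == checksum(restored)
```

A checksum match proves the bytes survived, but it does not prove the workload runs; pairing checks like this with booting restored virtual machines on test servers covers both.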
Capacity planning must extend to the remote storage subsystem, with management tools and processes to support it. If IT administrators monitor storage use and growth patterns on the remote storage system, they can upgrade capacity as needs evolve, before the remote storage array runs short of storage capacity and causes data protection errors.
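The growth monitoring described above can be reduced to a simple projection. This sketch assumes evenly spaced daily usage samples and linear growth, which is a rough first approximation; real capacity planning tools use richer models.

```python
# Hypothetical sketch: project how many days remain before the remote
# storage array runs short, from daily usage samples (oldest first).
# Assumes one sample per day and linear growth.

def days_until_full(samples_gb, capacity_gb):
    """Return projected days until capacity is exhausted, or None if usage is flat."""
    if len(samples_gb) < 2:
        raise ValueError("need at least two samples to estimate growth")
    daily_growth = (samples_gb[-1] - samples_gb[0]) / (len(samples_gb) - 1)
    if daily_growth <= 0:
        return None  # no growth observed; no projected exhaustion date
    return (capacity_gb - samples_gb[-1]) / daily_growth
```

Feeding an alert threshold (say, fewer than 30 projected days) into a check like this gives administrators time to upgrade capacity before data protection jobs start failing.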
Related Q&A from Stephen J. Bigelow