Data is the lifeblood of every modern business. Don't protect it, and you'll not only lose your data, but also your money and reputation.
Simply running backup or disaster recovery (DR) tools isn't enough to keep corporate information safe. Simple oversights, configuration errors and even connectivity or third-party availability problems can strand data and compromise the data protection and disaster preparedness of any business.
There is no uniform definition of stranded data. It can mean workloads and their associated data are lost because they were never adequately backed up or protected. In other cases, the backup is created, but unrecoverable. Intelligent management tools and regular testing keep IT teams on top of corporate data backup.
Lost but not forgotten
The most common cause of stranded data is simple administrative oversight. For example, the storage team creates a new logical unit number (LUN) to support a virtual workload, but that LUN is not added to the backup cycle. The application is unaffected, so there is no indication that data is unprotected -- until you need to restore it and there's no backup.
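Oversights like this can be caught with a periodic audit that compares what the storage team has provisioned against what the backup tool actually covers. A minimal sketch of that cross-check, with placeholder lists standing in for the storage array's and backup tool's real inventories:

```python
# Sketch: flag provisioned LUNs that are missing from the backup cycle.
# The two lists are illustrative placeholders; in practice they would be
# pulled from the storage array's API and the backup tool's job config.

def find_unprotected(provisioned_luns, backed_up_luns):
    """Return LUNs that exist on the array but appear in no backup job."""
    return sorted(set(provisioned_luns) - set(backed_up_luns))

provisioned = ["lun-app01", "lun-app02", "lun-vm-new"]  # from storage team
backed_up = ["lun-app01", "lun-app02"]                  # from backup tool

for lun in find_unprotected(provisioned, backed_up):
    print(f"WARNING: {lun} is provisioned but not in any backup job")
```

Run on a schedule, a report like this surfaces the "new LUN, no backup" gap before a restore request does.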
You can also strand data when an application migrates to a new server without the backup tool being updated. The tool still assumes the workloads reside on the old server and continues to protect them there. Most backup tools do report an error when IT staff misses such a configuration change; it's up to you to remediate the situation quickly.
Data backups sometimes exist but are unrecoverable. For example, you have corporate data on a tape archive, but upgrade the physical tape drive to a new standard that cannot read the legacy tapes. Or connectivity problems prevent virtual machine (VM) recovery from a remote storage area network, rendering the affected data inaccessible for an unpredictable period.
Stranded in the cloud
Stranded data was rare in traditional data centers with limited application deployments across physical servers. Workloads required more planning and took longer to provision and deploy; data had a more tangible presence in the data center that made it easy to keep in sight.
Today, virtualization has made the data center a far more fluid environment where VMs are created, migrated and even removed in a matter of moments instead of months. Cloud computing increased this fluidity, even allowing end users to create workloads on-demand with little -- if any -- oversight. VMs can be created with such speed and ease that it's easy to overlook the implications of data protection.
The concept of workload tiering exacerbates data stranding problems. Traditional IT applied the same corporate data backup strategy to every application and folder. IT planners have since started to vary data protection resources with the relative importance of each workload. Mission-critical workloads, for example, get frequent snapshots, while low-priority VMs need only occasional backups. This kind of prioritization optimizes backup performance and storage use, but also complicates scheduling, which is an opportunity for oversights.
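One way to keep tiered scheduling from becoming ad hoc is to make the tier-to-policy mapping explicit and fail loudly on anything unmapped. A minimal sketch, where the tier names, backup methods and intervals are all illustrative assumptions:

```python
# Sketch: explicit mapping of workload tiers to backup policies.
# Tier names and intervals are assumptions, not a vendor's actual schema.

BACKUP_POLICIES = {
    "mission-critical": {"method": "snapshot", "interval_hours": 1},
    "standard": {"method": "incremental", "interval_hours": 24},
    "low-priority": {"method": "full", "interval_hours": 168},
}

def policy_for(tier):
    """Look up the backup policy for a tier; raise on unknown tiers
    rather than silently leaving a workload unprotected."""
    try:
        return BACKUP_POLICIES[tier]
    except KeyError:
        raise ValueError(f"No backup policy defined for tier '{tier}'")

print(policy_for("mission-critical"))  # hourly snapshots
```

The design point is the exception: a workload assigned to a tier nobody defined should be an error, not an unprotected gap discovered at restore time.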
Prevent and overcome stranded data
It takes thoughtful policies and smart tools to ensure that every LUN or VM is properly protected and recoverable.
Don't try to implement or maintain manual backup processes in a fluid, virtualized or cloud data center. Instead, use data center management tools to identify unprotected LUNs or workloads. IT staff should prioritize unprotected workloads, folders and files, and match data protection levels to each workload's importance.
Some tools automate data protection as part of the LUN or VM creation process. They also can accommodate workload migrations without direct administrative intervention. Automation is particularly beneficial in environments that support user-based provisioning and other self-service features.
Closing holes in the backup regime is only half of the battle -- data has to be recoverable from those stored backups. Verify and test disaster recovery on a regular basis. Virtualization makes test restoration far easier than traditional physical infrastructures, because you can restore VM snapshots to almost any test server without affecting production systems or data stores.
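A test restore only proves something if the restored data matches the original. One common way to check is a checksum comparison; a minimal sketch, with the paths standing in for an original data store and a copy restored to an isolated test server:

```python
# Sketch: confirm a test restore by comparing SHA-256 checksums of the
# original and restored files. Paths are illustrative; in practice the
# restored copy would come from a VM snapshot restored to a test server.

import hashlib

def file_checksum(path, chunk_size=65536):
    """Compute a SHA-256 digest in chunks to avoid loading large files."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original_path, restored_path):
    """A restore only counts if the restored copy matches the original."""
    return file_checksum(original_path) == file_checksum(restored_path)
```

Checks like this turn "the backup job succeeded" into "the data is actually recoverable," which is the distinction this article is about.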