
Build a high-availability system with containers, data mirroring

Traditional backup systems are no longer the go-to data recovery method for IT. Learn how to create high availability with containers, data mirroring and cloud.

IT downtime can have a significant impact on any business. Traditionally, this made the speed of data recovery from backups a primary focus for IT teams. However, even as backup tools improved, recovery times were still not good enough for many organizations. In addition, a high-availability system used to be out of the financial reach of most organizations.

Now, there is a different, more cost-effective way to provide data center high availability: data mirroring alongside the use of cloud computing and containers.

Container options

Containers can help solve some of the big challenges around a high-availability system. For example, assume you have data that is fully mirrored to a secondary site. Now, assume the primary site fails. You have full access to your data, but what about the application? Even if you can fail over to the mirrored data, it is useless without the application. Now, you either have to wait while you provision the application on the mirrored site, or pay what could be exorbitant amounts of money to have a live version of the application running on that site -- just in case.

Containers, however, can package an entire application in a single, lightweight unit. While virtual machines carry the whole stack, from the operating system upward, containers carry only what the application needs and share the underlying operating system with other containers.

In the example above, IT teams could store a collection of containers at low cost on a secondary site. Then, if the primary site has issues, they can spin up the application container within minutes to access the mirrored data.
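As a rough illustration of that failover step, the sketch below uses the Docker SDK for Python to start a prebuilt application container on a standby host and attach it to the mirrored data. The image name, mount paths and port mapping are hypothetical, and the sketch assumes the image is already stored on the secondary site.

# Minimal failover sketch: start the stored application container on the
# secondary site and point it at the mirrored data. Assumes the Docker SDK
# for Python (pip install docker) and a prebuilt image already present on
# the standby host. Image name, mount path and port are hypothetical.
import docker

client = docker.from_env()

app = client.containers.run(
    "registry.example.com/orders-app:stable",  # hypothetical prebuilt image
    detach=True,
    ports={"8080/tcp": 80},                    # expose the app on the standby host
    volumes={
        "/mnt/mirrored-data": {                # mirrored copy of the primary's data
            "bind": "/var/lib/app/data",
            "mode": "rw",
        }
    },
    restart_policy={"Name": "on-failure"},
)
print(f"Standby application container started: {app.short_id}")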

When you choose a secondary site in the public cloud, the cost of cloud storage is so low that the cost of container storage, in this example, would be insignificant. When you actually need to spin up the containers, the cost becomes appreciable, but access to a working system still makes that cost pale in comparison to the full business cost of downtime.

For organizations that are less tolerant of downtime, it's possible to keep the containers spinning constantly, rather than store them and use them only when necessary. The cost will be higher, but if the primary site fails, the system can smoothly fail over to the backup site in near real time. You can also minimize the cost by paying for elastic resources; an idle, spinning container won't use much CPU or network capacity. You'll only need to increase resources when the primary site fails and failover occurs.
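One way to automate the failover itself is a small watchdog that probes the primary site and, after repeated failures, repoints DNS at the warm standby. The sketch below is a hedged example using Python with the requests library and boto3's Route 53 API; the hosted zone ID, record name, health check URL and standby address are all placeholders.

# Watchdog sketch: probe the primary site and fail DNS over to the warm
# standby when it stops responding. Requires the requests and boto3
# packages; zone ID, record name and addresses are hypothetical.
import time

import boto3
import requests

ZONE_ID = "Z0000EXAMPLE"            # hypothetical Route 53 hosted zone
RECORD = "app.example.com."
STANDBY_IP = "203.0.113.10"         # address of the spinning standby site

def primary_is_healthy():
    try:
        return requests.get("https://primary.example.com/health", timeout=5).ok
    except requests.RequestException:
        return False

def fail_over_to_standby():
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={
            "Comment": "Primary site down; failing over to standby",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": STANDBY_IP}],
                },
            }],
        },
    )

failures = 0
while True:
    failures = 0 if primary_is_healthy() else failures + 1
    if failures >= 3:               # three misses in a row triggers failover
        fail_over_to_standby()
        break
    time.sleep(30)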

Data mirroring challenges

Mirroring of data is, unfortunately, not as easy as it seems. Distance is a major issue: the farther away the mirror site, the higher the latency, and the harder it is to keep the mirror faithful to the source. Also, if data corruption occurs, the last thing you want is to mirror that corruption.

If your organization requires business continuity via a constant high-availability system, you'll have to pay for advanced data mirroring services. Cloud service providers, such as Amazon Web Services and Microsoft Azure, now offer high-speed connection services that enable long-distance data mirroring.

However, snapshots combined with data backup could be a less expensive option. A snapshot creates a read-only copy of data from a live system. It does not need the live system to be locked or taken down, and it's highly efficient in CPU and I/O utilization. There are different approaches to snapshots, but a copy-on-write approach is the best fit for the requirements described above. With this approach, every write to the data system is captured and applied to both the primary storage system and the remote system as a background task. Through these means, you can rapidly spin up a snapshot data set alongside a container to create a running system on a secondary site.
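On AWS, for instance, the snapshot-and-replicate pattern described above can be approximated by snapshotting the live volume and copying the snapshot to the secondary region in the background. The boto3 sketch below is a rough example under those assumptions; the volume ID and region names are placeholders.

# Snapshot-and-replicate sketch: take a point-in-time snapshot of the live
# volume, then copy it to the secondary region as a background task.
# Requires boto3; volume ID and region names are hypothetical.
import boto3

PRIMARY_REGION = "us-east-1"
SECONDARY_REGION = "us-west-2"
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical primary data volume

primary = boto3.client("ec2", region_name=PRIMARY_REGION)
secondary = boto3.client("ec2", region_name=SECONDARY_REGION)

# EBS snapshots are point-in-time and don't require taking the volume down.
snap = primary.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="HA mirror of primary data volume",
)
primary.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Copy the completed snapshot to the secondary site; a container on that
# site can later create a volume from it and spin up within minutes.
copy = secondary.copy_snapshot(
    SourceRegion=PRIMARY_REGION,
    SourceSnapshotId=snap["SnapshotId"],
    Description="Remote copy for failover",
)
print("Remote snapshot:", copy["SnapshotId"])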

Containers have become far more data-aware, as well. For example, they can mount a data volume that acts as a persistent store. Through the use of container orchestration systems, you can synchronize data snapshots from a primary site to the remote container. At the moment, this can be a bit difficult to achieve in a high-availability system, but it is worth watching to see how the market develops.
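As one hedged illustration of such a persistent store, the sketch below uses the official Kubernetes Python client to request a persistent volume claim that an application container on the secondary site could mount; the claim name, namespace and size are placeholders, not a prescribed configuration.

# Persistent-volume sketch: ask the orchestrator (Kubernetes here) for a
# persistent store that survives container restarts on the secondary site.
# Requires the official 'kubernetes' Python client; names are hypothetical.
from kubernetes import client, config

config.load_kube_config()           # or load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mirrored-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(
            requests={"storage": "50Gi"}   # hypothetical size for the mirrored set
        ),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)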

