Expert Q&A: High-availability technologies and trends

High-availability technologies can help a business maintain operations and stay competitive, but it can be difficult to weigh the alternatives and know what system will fit your data center.

With uptime and availability more important to a business than ever, data centers are exploring high-availability technologies. They’re considering the most cost-effective and least-disruptive options that can be implemented with the infrastructure that is currently in place. But it can be challenging to know what high-availability alternatives are available and what’s coming on the horizon.

In this podcast, Stephen Bigelow, Senior Technology Editor, speaks with Scott Gorcester, founder and president of Moose Logic, an IT solution provider in Washington state. 

 


Bigelow: Scott, let’s jump right in with a summary. What do you see as the most popular high-availability strategies for a mid-sized to large data center today? Is it still a choice of classic alternatives like server hardware clustering or virtual machine redundancy, or are there other key approaches in the mix?

Gorcester: I think that we still see a lot of server hardware clustering, although I don't know if I would say we're seeing different kinds of clustering. Certainly, virtual machine redundancy is still widely used. I think we are definitely seeing more sophistication in the software and hardware that we use today. I do see products like SAN replication being used with virtual machines, and then some other products, such as Marathon Technologies' everRun MX, that allow us to use some of our well-known virtualization technologies to provide mirroring of workloads. So we see some of the traditional methods, and these are evolving as well into some slick solutions to keep applications up and running.

Bigelow: High availability can pose some problems for data centers. What types of problems or errors do you see occurring in the industry? Where are IT professionals stumbling with high availability and what can they do about it?

Gorcester: I think there are a number of areas. One of the simpler ways to do high availability is simply to have your workloads running in a virtualization environment with the ability to either manually or automatically move workloads around, or to have the workload restart automatically should it fail. These can be fairly simple solutions, built on some of the popular virtualization technologies like VMware or Citrix XenServer.
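To make the restart-on-failure idea concrete, here is a minimal watchdog sketch. It uses the open-source libvirt/KVM stack and its virsh command line rather than the VMware or XenServer products Gorcester names; the domain name, guest address and port are hypothetical stand-ins, and a production HA product would do far more than this.

```python
# Minimal restart-on-failure watchdog: a sketch, not one of the vendor
# products named above. Assumes a KVM/libvirt host with the virsh CLI
# installed; domain name, guest address and port are hypothetical.
import socket
import subprocess
import time

DOMAIN = "app-server"            # hypothetical libvirt domain name
HOST, PORT = "10.0.0.15", 8080   # hypothetical guest address and service port
CHECK_INTERVAL = 10              # seconds between health checks

def service_alive(host, port, timeout=3):
    """Treat a successful TCP connect to the service port as 'healthy'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if not service_alive(HOST, PORT):
        # virsh start exits nonzero if the domain is already running,
        # so check=False keeps the watchdog loop alive either way.
        subprocess.run(["virsh", "start", DOMAIN], check=False)
    time.sleep(CHECK_INTERVAL)
```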

I think where clients are stumbling is in the proper design and architecture of their systems, and in the proper testing of these systems to make sure there's predictability around what happens in the event of a failure. In some cases, we're seeing these systems not being well designed. So IT shops are deploying these solutions, and then when failures occur the solution may not work as expected.

The other stumbling block I run into is failing to consider all of the things that could potentially make your application unavailable. High availability is not just about the ability to move or restart a workload. There are a lot of things to consider. If we move the workload, how can we access that workload? Within the data center that may be easy, but if workloads fail over to a remote location, it gets more complicated. Proper design and proper testing are key. It is important to test all high-availability scenarios so that you don't plan for one scenario only to find your solution ill-equipped to provide high availability in another.
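One way to act on that testing advice is to measure recovery time rather than assume it. The sketch below, with a hypothetical host and port, watches the protected service go down during a planned drill and reports how long it takes to come back, a number you can compare directly against your recovery objectives.

```python
# Failover-drill timer: a sketch for a planned test window, assuming you
# deliberately fail the primary while this runs. Host/port are hypothetical.
import socket
import time

HOST, PORT = "10.0.0.15", 8080   # hypothetical address of the protected service

def reachable(timeout=2):
    try:
        with socket.create_connection((HOST, PORT), timeout=timeout):
            return True
    except OSError:
        return False

while reachable():               # wait for the induced failure to begin
    time.sleep(1)
outage_start = time.monotonic()
while not reachable():           # wait for failover/restart to complete
    time.sleep(1)
print(f"Measured recovery time: {time.monotonic() - outage_start:.0f} seconds")
```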

Bigelow: You have a lot of expertise in high-availability deployments. What are some best practices or guidelines for IT staff who are considering or deploying a high-availability solution?

Gorcester: I would think about two issues first. What is the recovery time objective (RTO)? If something goes offline or becomes unavailable, how quickly do we need that application back online? The next thing I'm concerned about is the recovery point objective (RPO): when the application is back up, to what point in time did we recover? Is any data loss acceptable? The closer those two objectives are to zero, the more expensive and the more complex the system will need to be.
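As a back-of-the-envelope illustration of how those two objectives constrain a design, the sketch below assumes asynchronous replication, where worst-case data loss is roughly the replication interval; the function name and all figures are hypothetical.

```python
# RTO/RPO sanity check: a sketch, assuming asynchronous replication where
# worst-case data loss roughly equals the replication interval.
def meets_objectives(rto_min, rpo_min, restart_min, replication_interval_min):
    """All values in minutes. A design fits if its worst-case restart time
    is within the RTO and its replication interval is within the RPO."""
    return restart_min <= rto_min and replication_interval_min <= rpo_min

# Hypothetical example: a 15-minute RTO and 5-minute RPO, evaluated against
# a design that restarts workloads in 10 minutes and replicates every 5.
print(meets_objectives(rto_min=15, rpo_min=5,
                       restart_min=10, replication_interval_min=5))  # True
```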

Bigelow: Then there's the matter of cost. What kinds of costs are involved in a high-availability deployment, and what can IT planners do to mitigate high costs as they deploy it across their enterprise?

Gorcester: Costs are all over the map. We could be looking at relatively inexpensive solutions, going back to the RTO and RPO. In some conversations we will ask the client, "How long can you afford to be down?" Those answers range from zero to 24 hours, and each end of that range is a wildly different discussion.

Depending on the RTO and RPO, that's really going to drive what kind of solution we might be looking at. Lower-cost solutions might include free copies of hypervisors on equipment that is new or relatively new; older servers don't support a lot of today's virtualization features. A simple restart of the workload is relatively inexpensive. A system that automatically fails over or computes through a fault, such as Marathon everRun MX, where you can literally have applications mirrored and have zero downtime in the event of a failure, is certainly more expensive. The cost is highly variable: we could be talking about a few thousand dollars to $50,000 or $100,000, and certainly well above that if we get into high-end virtualization and SAN products.

Bigelow: And finally, let’s take a look toward the future. How do you see the current high-availability technologies changing in the next few years, and what new high-availability technologies should we keep an eye on?

Gorcester: A lot is changing all the time. Even today, we're enjoying technologies that simply weren't available or affordable as recently as six to 12 months ago. Hardware is becoming more powerful, there are great choices on a reasonable budget, and connectivity is getting less expensive.

In IT we generally see more bandwidth and better technology becoming more affordable as we move along. Products are becoming more sophisticated, so some of the challenges we faced a year ago are no longer as much of a challenge today. We have products from VMware, Citrix and Microsoft in virtualization, and these products go well beyond server virtualization. We see storage virtualization continuing to advance rapidly, storage becoming cheaper and easier to work with, and virtualized networks enabling us to do some amazing things.

For high availability, and even expanding the discussion into disaster recovery, the products are evolving very rapidly. I'm very excited about a couple of technologies from Marathon, and I'm very happy with the Citrix, VMware and Microsoft Hyper-V products. We really have an effective toolbox to work with. These technologies continue to evolve, and products are getting easier to make work together. It's not that we don't have struggles, but it's a pretty exciting time to be in the IT business, and we're certainly seeing a lot of great solutions come along. In some cases, a problem that exists today could be solved in just a few months.

This was last published in November 2011
