

Modern load-balancer options reflect the virtualized environment

Load balancing means something different in a virtualized IT department. Operational agility is now a mainstay, and it will only get more complex in the future.

Load balancers have traditionally been discrete appliances interposed between the WAN and a server farm. Their purpose was to distribute remote accesses across the server farm -- or in other words, level-load the servers.

That approach worked fine for a fixed structure with a block of dedicated servers, but today's IT environment has moved on, and businesses need their load balancers to reflect virtualized infrastructures and cloud systems.

The New Age agile balancer

The key to modern load-balancer options is operational agility. Today's workloads are dynamic, with daily load variations and frequent spikes. Load balancing needs to recognize this; it must extend beyond controlling fixed assets to handling virtual instances of applications.

Virtualization has opened up a new universe of capabilities for balancing. Efficiency improves tremendously when the balancer can increase and decrease the number of instances of a given application. There is no longer a fixed amount of horsepower or a set number of servers. The dynamic range of the resource pool can go as low as a single instance or as high as the whole server cluster.

To take advantage of this dynamic range, the balancer must tie in to the cluster's orchestration software. Enabling instance control gives the balancer additional responsibilities. When a server fails, a traditional balancer simply rebalances the workload over the remaining servers, which can make work in progress -- an online purchase, for instance -- less responsive or even force a restart. New Age load-balancer options can fire up replacement instances as well as rebalance the load, avoiding all but a momentary drop in performance, while a well-designed app can restore work in progress.
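As a rough illustration, the following Python sketch shows a balancer that asks the orchestration layer for a replacement instance when one fails, instead of only spreading load over the survivors. The Orchestrator and AgileBalancer classes are hypothetical stand-ins for this article, not any vendor's API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Orchestrator:
    """Hypothetical orchestration hook the balancer ties into."""
    instances: List[str] = field(default_factory=list)
    counter: int = 0

    def launch_instance(self, app: str) -> str:
        self.counter += 1
        instance = f"{app}-{self.counter}"
        self.instances.append(instance)
        return instance

    def remove_instance(self, instance: str) -> None:
        self.instances.remove(instance)

class AgileBalancer:
    def __init__(self, orchestrator: Orchestrator, app: str):
        self.orchestrator = orchestrator
        self.app = app
        self.next_index = 0

    def route(self) -> str:
        """Round-robin a request across the current instance pool."""
        pool = self.orchestrator.instances
        target = pool[self.next_index % len(pool)]
        self.next_index += 1
        return target

    def on_instance_failure(self, failed: str) -> str:
        """Drop the failed instance, then fire up a replacement rather than
        only spreading its load over the remaining instances."""
        self.orchestrator.remove_instance(failed)
        return self.orchestrator.launch_instance(self.app)

if __name__ == "__main__":
    orch = Orchestrator()
    for _ in range(3):
        orch.launch_instance("shop")
    balancer = AgileBalancer(orch, "shop")
    print(balancer.route())                         # e.g. shop-1
    print(balancer.on_instance_failure("shop-2"))   # launches shop-4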

This agile response to shifts in load or instance failures is a major cost-saver, both for those with an in-house cluster and public cloud users. Virtual instances are usually a small segment of a CPU core's capabilities. This granularity efficiently matches resources with the current workload and avoids having idle servers at slow load times.

Load-balancer platforms

Today's load-balancer options are no longer fixed, single-function appliances. The ubiquity of x64 commercial off-the-shelf servers has opened up a software-only market. This initially allowed balancer software to be installed on a server engine of the user's choice, giving users a wide range of performance levels and configurations to choose from.

As the balancer has evolved to meet cloud needs, the platform requirement has evolved with it. The first logical step is to house the balancer program in a virtual instance, giving it the same sort of resiliency as any other app. It also makes sense to let the load balancer itself be replicated, both for failover and to open up significant operational scale-out.

Instance-based solutions have inevitably led to load balancing as a service (LBaaS), where the balancer is rented for a monthly fee per instance. Companies such as DigitalOcean offer LBaaS for as low as $20 per month, and all the major cloud vendors provide a load-balancer service in their clouds at similar prices.
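For a rough sense of what that rental looks like in practice, here is a hedged Python sketch that requests a balancer from an LBaaS provider over a REST API. The endpoint and field names are modeled loosely on DigitalOcean's public load-balancer API, but treat them as assumptions and check the provider's current documentation.

import os
import requests

API_URL = "https://api.digitalocean.com/v2/load_balancers"  # assumed endpoint
TOKEN = os.environ["DO_API_TOKEN"]  # personal access token set in the environment

payload = {
    "name": "web-pool-lb",
    "region": "nyc3",
    "forwarding_rules": [
        {
            "entry_protocol": "https",
            "entry_port": 443,
            "target_protocol": "http",
            "target_port": 80,
        }
    ],
    "droplet_ids": [1001, 1002, 1003],  # the instances behind the balancer
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # the new balancer's ID, IP address and status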

A single code base covers load-balancer options that are appliance-based, software-only and LBaaS -- which means vendors can offer all three services. However, many vendors concentrate on the software-only or service approaches, and stay out of the platform business altogether.

Additional services

The balancer is a one-to-many switch, which opens up possibilities for software extras. Vendors commonly offer optional encryption of the WAN-side traffic and data compression, to name two.

These extras are valuable when the ongoing data stream passes through the balancer, and they concentrate key management in one place as well. On the other hand, a single gateway with the limited performance of today's encryption software might not make sense for use cases with heavy levels of data transfer, such as media delivery. In those cases, encrypt at the server instance instead, or route the traffic through an encryption microservice running in another instance.
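As a toy illustration of the single-gateway pattern discussed above, the sketch below terminates TLS at one point and forwards plaintext to a backend instance. The hostnames, ports and certificate paths are placeholders; real deployments would use purpose-built balancer or proxy software.

import asyncio
import ssl

BACKEND_HOST, BACKEND_PORT = "10.0.0.5", 8080  # assumed internal app instance

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes from reader to writer until EOF."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # TLS is already terminated here; forward plaintext to the backend.
    backend_reader, backend_writer = await asyncio.open_connection(
        BACKEND_HOST, BACKEND_PORT
    )
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("gateway.crt", "gateway.key")  # placeholder cert files
    server = await asyncio.start_server(handle_client, "0.0.0.0", 443, ssl=ctx)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())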

Balancer tools with rule-based automation might be much more useful. Use them to create a larger instance pool in anticipation of the daily workload, and look for tools that can work with the orchestration system to monitor traffic levels at the balanced server pool, as sketched below. This helps avoid response slowdowns, a critical issue in today's multi-choice mobile environment.
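A minimal sketch of such a rule engine follows, combining an anticipatory time-of-day rule with a reactive rule driven by measured traffic. The thresholds and the scale_pool() hook are illustrative assumptions, not a specific product's API.

from datetime import datetime

MORNING_PEAK_HOURS = range(8, 11)   # assumed daily spike window
REQUESTS_PER_INSTANCE = 500         # assumed comfortable load per instance
MIN_INSTANCES = 2

def desired_pool_size(now: datetime, requests_per_second: float) -> int:
    """Combine an anticipatory rule (time of day) with a reactive rule
    (observed traffic at the balanced server pool)."""
    reactive = max(MIN_INSTANCES, int(requests_per_second // REQUESTS_PER_INSTANCE) + 1)
    anticipatory = 6 if now.hour in MORNING_PEAK_HOURS else MIN_INSTANCES
    return max(reactive, anticipatory)

def scale_pool(target: int) -> None:
    # Placeholder for an orchestration call such as "set replica count to target".
    print(f"requesting pool of {target} instances")

if __name__ == "__main__":
    scale_pool(desired_pool_size(datetime.now(), requests_per_second=1800))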

Future potential for balancers

Containers are a hot topic in today's IT world, and load balancing is evolving to take them in stride. Containers make virtual instance creation more agile, with fast startup times compared with hypervisor-based instances. That plays well with the New Age balancer approach, which aims for high agility and responsiveness to demand.

Expect rules-based engines to become more sophisticated, with an emphasis on anticipatory response. Most load balancers restrict their operations to a single cluster or cloud zone; with recent whole-zone crashes in mind, future load-balancer options may integrate more with WAN services to make cross-zone balancing and resilience an option.

Thanks to the rise of software-defined infrastructure -- particularly networking -- expect tomorrow's balancers to integrate tightly with software-defined technologies. That kind of integration can help load-balancer failover avoid bottlenecks and enable in-path optimization.

Making your choice

There are many vendors in the market, from traditional appliance makers to new startups. Products range in complexity, and businesses should match their needs to the offerings. Some companies might need a balancer with encryption and compression, for example.

A business that uses a public cloud might pursue the LBaaS option, whether from its cloud provider or a third party. With hybrid clouds, LBaaS is still an option, but consider the performance implications. The object of the balancer in a hybrid configuration is efficient, automated cloud bursting, as in the sketch below.
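The sketch below shows the bursting decision in its simplest form: keep traffic on the in-house pool until it nears capacity, then spill the overflow to a public cloud pool. The capacity figure, threshold and pool names are assumptions for illustration.

ON_PREM_CAPACITY = 1000          # requests/s the in-house cluster handles well
BURST_THRESHOLD = 0.85           # start bursting at 85% of local capacity

def choose_pool(current_rps: float) -> str:
    """Return which pool a new request should be sent to."""
    if current_rps < ON_PREM_CAPACITY * BURST_THRESHOLD:
        return "on-prem-pool"
    return "public-cloud-pool"   # burst the overflow to the public cloud

if __name__ == "__main__":
    for rps in (400, 900, 1200):
        print(rps, "->", choose_pool(rps))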

For the advanced user, big data presents its own load-balancing challenge. Some software tools, such as Splunk, have a load balancer built in; in fact, Splunk recommends not using an external balancer to distribute incoming data streams.

The long list of vendor choices means you're guaranteed to find load-balancer options that match your use cases, but first do a total-cost-of-ownership analysis and some pre-selection based on features to trim the choices. 

