Configuring load balancing with Linux Virtual Server

If you have busy servers, a load balancer may help achieve optimal resource utilization. By implementing a load balancer, you can distinguish between a front-end server, which handles all client requests entering the network, and back-end servers that handle the real workload. You can implement a load balancer as a hardware appliance or save money by using Linux as the load balancer platform. In this tip, you'll learn how to implement a load balancer on Linux.

First, you'll need Linux Virtual Server (LVS), which consists of the load balancer on the front-end server and the back-end servers that handle the workload. It can balance a wide range of services, such as Web, email, proxy-cache, FTP and more.

Understanding Linux Virtual Server
The key component of Linux Virtual Server is the ip_vs kernel module, which implements load balancing at the transport layer of the Open Systems Interconnection (OSI) model. The load balancer, which offers ip_vs services, is also referred to as the director.

The director is responsible for forwarding packets to the back-end nodes in the Linux Virtual Server. It can forward packets using three different methods:

  • Network address translation (NAT) -- This is the simplest method to set up, but the load balancer must rewrite the addresses of all packets in both directions, because replies also pass back through it.
  • IP tunneling -- This process is slightly more complicated than NAT. The director encapsulates every packet it receives in another IP packet and sends it to one of the back-end servers. The advantage here is that the back-end server can send its reply directly to the client, instead of through the load balancer.
  • Direct routing -- Here, the director sends the unmodified packet directly to the target server, which requires a specific configuration on the back-end servers for the method to work.

The easiest way to set up the load balancer is through NAT. In the example below, we use the ipvsadm command, which lets you create a working load balancer from the command line. Before starting the configuration, correctly set up the network. Since we're covering the NAT method, you'll need the following networking elements:

  • The load balancer needs an internal and an external network interface, and each has to be in a different subnet.
  • To test the configuration, you need a client that communicates through the external network interface on the load balancer.
  • The back-end servers must communicate with the load balancer's internal network interface.
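As a concrete sketch of this layout, the director's two interfaces might be configured as follows. The interface names eth0/eth1 and the subnet masks are assumptions chosen to match the addresses used later in this tip; adjust them to your environment:

```shell
# Assumed interface names and addresses -- adapt to your own network.
# External interface: clients reach the virtual service here.
ip addr add 10.0.1.1/24 dev eth0
# Internal interface: the back-end servers sit in this subnet.
ip addr add 10.0.0.1/24 dev eth1
```

Because we're using NAT, each back-end server must also use the director's internal address (10.0.0.1 in this sketch) as its default gateway; otherwise replies would bypass the load balancer and never be translated back.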


Figure 1: Example of the LVS configuration

As seen in Figure 1, the load balancer in this configuration is going to be used as a router. After setting up the internal and external IP addresses, you must configure the load balancer server as a router. This is easily accomplished by enabling routing in the /proc file system. To enable routing, use the following command:

echo 1 > /proc/sys/net/ipv4/ip_forward

To make this setting persist across reboots, also add the following line to the /etc/sysctl.conf file:

net.ipv4.ip_forward=1
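To activate the saved setting without rebooting, and to verify that forwarding is actually on, you can run:

```shell
# Reload kernel parameters from /etc/sysctl.conf
sysctl -p
# Should print 1 if forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
```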

After configuring networking, you can type the rules that the load balancer should use. To create the configuration in Figure 1, enter the following command on the load balancer:

ipvsadm -A -t 10.0.1.1:80

In this first line, you've added a new virtual service (-A) and, with the -t option, specified that it works on the TCP protocol at address 10.0.1.1, port 80. Next, you need to bind the real servers -- tell the load balancer which back-end servers to use -- by specifying each back-end server with the -r option. The -m option tells the load balancer to use NAT (masquerading) as the forwarding method. These tasks are accomplished through the following commands:

ipvsadm -a -t 10.0.1.1:80 -r 10.0.0.10:80 -m
ipvsadm -a -t 10.0.1.1:80 -r 10.0.0.20:80 -m
ipvsadm -a -t 10.0.1.1:80 -r 10.0.0.30:80 -m
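To confirm that the virtual service and its three real servers have been registered, you can list the IPVS table (the -n option shows addresses numerically instead of resolving names):

```shell
# Show the current virtual services and their real servers
ipvsadm -L -n
```

The output should list the 10.0.1.1:80 service with the three real servers, each marked with the Masq forwarding method.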

After entering these commands, you should have a working configuration. It's easy to test whether you have followed these steps correctly: go to the test computer and access the load balancer's HTTP port. The load balancer will ensure the request is forwarded to one of the back-end servers.
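From the test client, this check can be scripted. Assuming each back-end server serves a page that identifies it, and that curl is installed on the client (both assumptions), repeated requests should be spread across the three servers:

```shell
# Fire several requests at the virtual service address;
# with distinct index pages on each back end, the responses
# should alternate between the back-end servers.
for i in 1 2 3 4 5 6; do
    curl -s http://10.0.1.1/
done
```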

Now you've learned how to set up a pilot Linux Virtual Server load balancer environment. Using this test environment, you can easily set up a proof-of-concept configuration and see if you like Linux Virtual Server’s behavior. After testing the server thoroughly, you can work out additional details, including which protocol to use and whether you want to configure your solution for high availability.

ABOUT THE AUTHOR: Sander van Vugt is an independent trainer and consultant based in the Netherlands. Van Vugt is an expert in Linux high availability, virtualization and performance and has completed several projects that implement all three. He is also the writer of various Linux-related books, such as Beginning the Linux Command Line, Beginning Ubuntu Server Administration and Pro Ubuntu Server Administration.

This was first published in January 2011
