If you have busy servers, a load balancer can help you achieve optimal resource utilization. By implementing a load balancer, you distinguish between a front-end server that distributes incoming requests and the back-end servers that handle the actual workload.
First, you’ll need Linux Virtual Server (LVS), which provides the load balancer that runs on the front-end server. It can balance a wide range of services, such as Web, email, proxy-cache, FTP and more.
Understanding Linux Virtual Server
The key component of Linux Virtual Server is the ip_vs kernel module, which implements load balancing at the transport layer of the Open Systems Interconnection (OSI) model. The load balancer, which offers ip_vs services, is also referred to as the director.
The director is responsible for forwarding packets to the back-end nodes in the Linux Virtual Server. Three different methods can be used to forward packets:
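On most distributions the ip_vs module is loaded automatically the first time you use the ipvsadm tool, but you can load and verify it by hand. A quick check, assuming a kernel with LVS support built as a module:

```shell
# Load the IPVS core module (usually happens automatically when ipvsadm runs)
modprobe ip_vs

# Verify that the module is present in the running kernel
lsmod | grep ip_vs
```

If the grep shows an ip_vs line, the kernel side of the load balancer is ready.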
- Network address translation (NAT) -- This works well and is the simplest to set up, but the load balancer must rewrite the addresses in every incoming packet, and all reply traffic must pass back through the load balancer as well.
- IP tunneling -- This process is slightly more complicated than NAT. The director encapsulates every packet it receives in another IP packet and sends it to one of the back-end servers. The advantage here is that the back-end server can send its reply directly to the clients, instead of through the load balancer.
- Direct routing -- Here, the director sends the unmodified packet directly to the target server, which must be specifically configured to accept packets addressed to the virtual IP for the method to work.
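For direct routing, the specific configuration on each back-end server typically means adding the virtual IP to a non-ARPing interface, so the server accepts the forwarded packets without answering ARP requests for the virtual address itself. A minimal sketch, assuming a virtual IP of 10.0.1.1 (matching the NAT example below; in a real direct-routing setup, director and back ends share a subnet):

```shell
# On each back-end server (direct routing only):
# do not answer or advertise ARP for addresses held on loopback
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# Add the virtual IP as a host address on lo so the server
# accepts packets that the director forwards to it unmodified
ip addr add 10.0.1.1/32 dev lo
```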
The easiest way to set up the load balancer is through NAT. In the example below, the ipvsadm command is used, which allows you to easily create a working load balancer from the command line. Before starting the configuration, set up the network correctly. Since we're covering the NAT method, you'll need the following networking elements:
- The load balancer needs an internal and external network interface, and each has to be in different subnets.
- To test the configuration, you need a client that communicates through the external network interface on the load balancer.
- The back-end servers must communicate with the load balancer’s internal network interface.
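The requirements above can be sketched with a few ip commands. The interface names (eth0, eth1) and the addresses are assumptions chosen to match the ipvsadm example that follows:

```shell
# On the load balancer:
ip addr add 10.0.1.1/24 dev eth0   # external interface, reachable by clients
ip addr add 10.0.0.1/24 dev eth1   # internal interface, toward the back ends

# On each back-end server (10.0.0.10, .20, .30), point the default
# route at the load balancer's internal address so replies travel
# back through the director -- a requirement of the NAT method
ip route add default via 10.0.0.1
```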
Figure 1: Example of the LVS configuration
As seen in Figure 1, the load balancer in this configuration is going to be used as a router. After setting up the internal and external IP addresses, you must configure the load balancer server as a router. This is easily accomplished by enabling routing in the /proc file system. To enable routing, use the following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
If you want this setting to be applied automatically after a reboot, also add the following line to the /etc/sysctl.conf file:

net.ipv4.ip_forward = 1
After configuring networking, you can define the rules that the load balancer should use. To create the configuration in Figure 1, enter the following command on the load balancer:
ipvsadm -A -t 10.0.1.1:80
In this first line, you’ve added a new virtual service (-A) and, with the -t option, specified that it should use the TCP protocol on address 10.0.1.1, port 80. Next, you need to bind the real servers -- tell the load balancer which back-end servers to use -- by adding each one (-a) and specifying its address with the -r option. The -m option tells the load balancer to use NAT (masquerading) as the forwarding method. These tasks are accomplished through the following commands:
ipvsadm -a -t 10.0.1.1:80 -r 10.0.0.10:80 -m
ipvsadm -a -t 10.0.1.1:80 -r 10.0.0.20:80 -m
ipvsadm -a -t 10.0.1.1:80 -r 10.0.0.30:80 -m
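At this point you can verify the resulting configuration with ipvsadm's list option. The exact output format varies by version, but it should show the virtual service with its three real servers and the Masq forwarding method:

```shell
# List the current IPVS table with numeric addresses
ipvsadm -L -n
```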
After entering these, you should have a working configuration. It’s easy to test whether you have followed these steps correctly. Go to the test computer and access the load balancer’s HTTP port. The load balancer will ensure the request is forwarded to the appropriate server.
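From the test computer, a simple way to exercise the setup is to request the virtual address a few times; with the default scheduler, successive requests should be spread across the back-end servers (curl and a running Web server on each back end are assumed here):

```shell
# On the client: send a few requests to the virtual service
for i in 1 2 3; do
  curl -s http://10.0.1.1/ | head -n 1
done
```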
Now you've learned how to set up a pilot Linux Virtual Server load balancer environment. Using this test environment, you can easily set up a proof-of-concept configuration and see if you like Linux Virtual Server’s behavior. After testing the server thoroughly, you can work out additional details, including which protocol to use and whether you want to configure your solution for high availability.
ABOUT THE AUTHOR: Sander van Vugt is an independent trainer and consultant based in the Netherlands. Van Vugt is an expert in Linux high availability, virtualization and performance and has completed several projects that implement all three. He is also the writer of various Linux-related books, such as Beginning the Linux Command Line, Beginning Ubuntu Server Administration and Pro Ubuntu Server Administration.
This was first published in January 2011