Data Center.com

Tweak Linux swap to improve server performance

By Sander van Vugt

Cache and swap space are powerful tools when doing Linux performance tuning. Many think swap space isn’t useful and therefore configure a minimal amount. But if you want optimal performance on your servers, you need a better approach to managing cache and swap space.

On any operating system, swap is used as additional memory. If a system runs out of physical RAM, it can start allocating swap, which is memory emulated on hard disk. Swap is much slower than RAM, but it's always good to have some available: if a server runs out of memory entirely, both RAM and swap, it can crash.

Another related item is cache. Cache is used to store regularly used files in memory. Using cache is preferred to fetching data from the hard disk, which is much slower. A system with insufficient cache needs to fetch data from disk more frequently, resulting in decreased performance.

Before going in depth on swap configuration, there are a few general things to know. Some workloads have very specific swap requirements, such as servers that run Oracle or SAP, whose vendors prescribe set amounts of swap. If your server runs Oracle or SAP, simply apply the vendor's recommendations to avoid problems with these mission-critical applications.

In cases where neither SAP nor Oracle is used, generic swap usage rules apply: Linux moves pages to swap mainly when memory comes under pressure. For a more sophisticated understanding of swap use, it's important to understand its counterpart as well. This is the situation where a server has lots of available memory; in that case, the spare memory is used as cache. If you really want to understand swap, you should understand cache as well.

Cache versus swap in Linux performance tuning
Cache is RAM used to temporarily store recently used files. If a request comes in to read a file, that file typically needs to be fetched from the server's hard disk. Once fetched, it is copied to RAM and served from there to the client that requested it. In performance terms, copying a file from hard disk to RAM is a very expensive operation. To get good performance from a server, there must be enough cache available. In general, aim for at least about 20% of total RAM as cache; more is often better.

The opposite of cache is swap, where inactive memory pages are moved from RAM to disk. By doing so, the server frees up memory for other things, including the caching of files. By default, the kernel tries to swap out only memory pages that haven't been used recently but still need to remain available to the process that allocated them. The kernel can easily do that, because when a process requests memory, it normally requests much more than it really needs at that moment. This reserves memory for later use, which is good for performance; and since that memory isn't actually in use, it is also safe to move it to swap.

There is a close relationship between the amount of swap and the amount of cache a server has available. The starting point for optimizing swap on your server is that, at all times, at least 20% of total RAM should be in use as cache; 30% is even better. To reach that goal, the server must be willing enough to move memory pages to swap: if cache drops below the minimum you want, increase the server's tendency to swap. To identify the amount of cache currently in use on your server, use the free -m command. The server should have been running for at least a few hours to give a good impression of how much RAM is typically available for caching.

Use free -m to determine the amount of cache

linux-s3w6:~ # free -m
             total       used       free     shared    buffers     cached
Mem:           993        721        271          0         25        272
-/+ buffers/cache:        423        569
Swap:         1983          0       1983
linux-s3w6:~ #

In the free -m output above, you can see the total amount of RAM (993 MB in this case), split into used and free memory. You can also see how much of the used memory consists of shared memory pages, buffers and cache. Here, buffers and cache together account for almost 300 MB, about 30% of total RAM, which means this system is in good shape. On the last line, you can see that swap is available but not in use.
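As a quick check, the buffers-plus-cache percentage can be computed directly from free -m output with awk. This is only a sketch: it feeds in the sample numbers from the output above via a here-document, and the column positions assume the older free(1) layout shown in this article (total, used, free, shared, buffers, cached). On a live server you would pipe free -m into the same awk program.

```shell
# Sketch: buffers + cached as a percentage of total RAM, using the
# sample numbers from the free -m output above. On a live server:
#   free -m | awk '/^Mem:/ { printf "cache: %d%%\n", 100 * ($6 + $7) / $2 }'
awk '/^Mem:/ { printf "cache: %d%%\n", 100 * ($6 + $7) / $2 }' <<'EOF'
             total       used       free     shared    buffers     cached
Mem:           993        721        271          0         25        272
EOF
```

With the sample numbers, (25 + 272) / 993 works out to roughly 30%, right at the target described above.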

If your server falls below the desired amount – 20% to 30% – of available cache, you first need to determine if it has enough swap space. A modern server typically has a minimum of 2 GB of swap space, and if it has more than 8 GB of RAM, it should also have about 25% of the total RAM available for swapping. Knowing that information will help you make sure that your server has enough swap space before you do any Linux performance tuning.
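The sizing rule above can be turned into a small sketch. The ram_mb value is hard-coded here for illustration; on a real server it would come from free -m, as shown in the comment.

```shell
# Sketch of the sizing rule above: at least 2 GB of swap, and roughly
# 25% of RAM once the machine has more than 8 GB of RAM.
ram_mb=16384   # example value; live: ram_mb=$(free -m | awk '/^Mem:/ {print $2}')

if [ "$ram_mb" -gt 8192 ]; then
    swap_mb=$(( ram_mb / 4 ))   # about 25% of RAM
else
    swap_mb=2048                # 2 GB minimum
fi
echo "suggested swap: ${swap_mb} MB"
```

For a 16 GB server this suggests 4 GB of swap; anything with 8 GB of RAM or less gets the 2 GB minimum.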

Increasing ‘swappiness’
If the amount of cache falls below the desired 20% to 30% of RAM, you can use the swappiness parameter to increase it.

By default, the Linux kernel is fairly willing to swap, even if that doesn't show on most servers: with enough RAM, the amount of swap in use stays at or near zero. To influence the swap behavior of a server, an administrator can modify the swappiness parameter, which sets the willingness of the kernel to move memory pages to swap.

Increasing swappiness makes sense if the amount of cache drops too low. By increasing swappiness, memory pages will be moved from RAM to swap sooner, which frees up memory pages and makes them available for use by other things, such as cache.

The swappiness parameter can have a value between 0 and 100, where 0 means "do not swap at all" and 100 means "swap as soon as you can." By default, the kernel's swappiness is set to 60. To change the parameter at runtime, write a new value to the file /proc/sys/vm/swappiness, using the following command:

echo 80 > /proc/sys/vm/swappiness

To make the setting persistent, you should also include the following line in /etc/sysctl.conf:
vm.swappiness = 80
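After changing the value, you can read it back to confirm the change took effect. This sketch only reads; it assumes a Linux system with /proc mounted and the sysctl utility installed.

```shell
# Read the current swappiness value back; both commands report the
# same number, the second in "vm.swappiness = <value>" form.
cat /proc/sys/vm/swappiness
sysctl vm.swappiness
```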

Tweaking a Linux server's swap settings in this way results in better I/O handling and, with it, much better Linux system performance.

ABOUT THE AUTHOR: Sander van Vugt is an independent trainer and consultant living in the Netherlands. Van Vugt is an expert in Linux high availability, virtualization and performance and has completed several projects that implement all three. Sander is also a regular speaker on many Linux conferences all over the world. He is also the writer of various Linux-related books, such as Beginning the Linux Command Line, Beginning Ubuntu Server Administration and Pro Ubuntu Server Administration.

12 Jul 2012
