Optimizing SUSE Linux performance for VMware environments

Learn how to optimize SUSE Linux Enterprise Server (SLES) on VMware by tuning both the VMware environment and SUSE itself. For better performance, switch to VMware's paravirtual disk driver, and adjust the I/O scheduler and journal settings in SUSE.

Since VMware now offers SUSE Linux Enterprise Server (SLES) for free to its vSphere customers, many companies have implemented SUSE for their Linux needs. However, even if you have installed the SUSE kernel that was developed for use in VMware environments, you can still gain a lot by further optimizing performance as described in this tip.

Optimizing performance for SUSE in VMware environments requires two steps. First, you have to apply optimization techniques in the VMware environment. Then, after optimizing the host environment, you can use some additional optimization techniques in the virtual machine.

Measuring performance optimization
Some advanced tools are available to measure the results of your performance optimization efforts. One is Bonnie, a set of benchmark scripts that measures read and write performance in a variety of ways. If you really want exact numbers, it's a good idea to use Bonnie. But if you just want a basic idea of how well your virtual machine is doing, a much simpler trick gives a pretty good impression.

To get a basic idea of current performance, you can use the following command from a Linux shell environment:

 time dd if=/dev/zero of=/1gfile bs=1M count=1024

Using this command, you'll create a 1 GB file at the location that you specify. Put it in the root file system, for example, to measure the performance of the root file system, or put it on a mounted SAN device to measure the performance of the SAN device. The result of this command will look as follows:

 root@sles:/# time dd if=/dev/zero of=/1gfile bs=1M count=1024
 1024+0 records in
 1024+0 records out
 1073741824 bytes (1.1 GB) copied, 8.86777 s, 121 MB/s
 real      0m8.931s
 user      0m0.000s
 sys       0m1.890s

The sys line shows how much time the kernel has really spent handling this command. The line that reads "real 0m8.931s" shows how much time your computer took to accomplish the task, measured from the moment you started it until it finished. In the meantime, the scheduler has also switched between other tasks, so if your server is very busy with other work, this parameter doesn't give a very accurate reading. But if you don't have any other significant processes running on your computer, it provides you with a pretty good idea. The following formula applies: real - sys = overhead.
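The overhead formula can also be computed automatically. The following is a minimal sketch, not a polished tool: it assumes bash (for the TIMEFORMAT variable), and the file path and the 64 MB write size are arbitrary examples.

```shell
#!/bin/bash
# Sketch: run the dd test and compute the overhead (real - sys) from the
# formula above. /tmp/ddtest and count=64 are arbitrary examples; raise
# count for a more realistic measurement.
export LC_NUMERIC=C                 # force a dot as decimal separator
TESTFILE=/tmp/ddtest
TIMEFORMAT='real=%R sys=%S'         # limit bash's time output to what we need
t=$( { time dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>/dev/null; } 2>&1 )
rm -f "$TESTFILE"
real=${t#real=}; real=${real%% *}   # "real=0.42 sys=0.11" -> 0.42
sys=${t##*sys=}
overhead=$(awk -v r="$real" -v s="$sys" 'BEGIN { printf "%.3f", r - s }')
echo "real=${real}s sys=${sys}s overhead=${overhead}s"
```

Run it a few times and average the results; a single pass can be skewed by caching.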

You will always have some overhead, and the figures shown above are actually quite good. There can, however, be a huge difference between the two numbers, and this is often the case when virtualizing SUSE in VMware. For instance, the real time might be well over a minute, while the system time to write the 1 GB file is still only about two seconds!

Optimizing VMware
Optimizing VMware for the best performance in SUSE Linux is actually quite easy. By default, VMware installs an LSI Logic controller to offer a virtualized disk device. This controller captures and emulates all data generated by the virtual machine, and a lot of work is involved in doing that. The solution is to replace the LSI Logic controller with the VMware paravirtual SCSI disk driver. Because this paravirtual driver is optimized for the SUSE VMware kernel, you'll notice a huge performance gain after applying it. Don't forget to install VMware Tools as well. These ensure that the best drivers are used in the virtual machine and thereby maximize performance from the VMware side.
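After switching controllers, you can verify from inside the guest that the paravirtual driver is actually in use. A minimal sketch: vmw_pvscsi is the module name current Linux kernels use for this driver; on a VM still attached to the emulated LSI Logic controller (or on a non-VMware machine) the check finds nothing.

```shell
#!/bin/sh
# Sketch: check whether the VMware paravirtual SCSI driver is loaded.
# vmw_pvscsi is the module name used by current Linux kernels; a VM that
# still uses the emulated LSI Logic controller will not have it.
if grep -qs '^vmw_pvscsi' /proc/modules; then
    echo "paravirtual SCSI driver loaded"
else
    echo "paravirtual SCSI driver not loaded"
fi
```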

Optimizing SUSE
In SUSE Linux there are also some options that are good candidates for optimization. The first is the behavior of the I/O scheduler, the part of the Linux kernel that determines how disk requests are handled. There are four settings:

cfq: this stands for Completely Fair Queuing, and it means that the scheduler uses balanced average settings, which works well on a system where read and write requests are more or less equally balanced and bandwidth is distributed equally between processes.

anticipatory: with this setting, the I/O scheduler tries to anticipate the next read request, which means that it reads some blocks ahead. Memory buffers are allocated for read optimization, which is fine for read-intensive environments, but it also means that write requests will be negatively affected. In recent kernels (2.6.33 and later), this scheduler has been removed in favor of cfq.

deadline: with this setting, the I/O scheduler attaches a deadline to every request and guarantees that no request waits in the queue longer than that deadline. This gives predictable service times and prevents requests from starving, which is why this option is often recommended for database environments.

noop: because many modern disk controllers, RAID controllers and SAN devices already take care of read/write optimization themselves, it may make sense not to do any Linux-based optimization at all. You should at least try this option to find out how much your storage controller helps in optimizing I/O requests.

The I/O scheduler setting is stored in a file in the sys file system. In the directory /sys/block, you'll find a subdirectory for every storage device on your server, and within that directory, the file queue/scheduler. Use cat to find out the current setting:

 root@sles:/# cat /sys/block/sda/queue/scheduler
 noop anticipatory deadline [cfq]

As you can see, the I/O scheduler is set to Completely Fair Queuing. To find out whether you can benefit from one of the other options, just echo the new value to the configuration file:

 echo deadline > /sys/block/sda/queue/scheduler
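To compare all four options, you can cycle through them and repeat the dd test from earlier for each one. The following is a sketch for a test system, not production: it needs root, and the device name sda is an example.

```shell
#!/bin/bash
# Sketch: benchmark each I/O scheduler in turn with the dd test from above.
# Needs root; the device name sda is an example, adjust it to your system.
DEV=sda
for sched in noop anticipatory deadline cfq; do
    # Skip schedulers the running kernel doesn't offer
    if ! echo "$sched" > "/sys/block/$DEV/queue/scheduler" 2>/dev/null; then
        echo "$sched: not available, skipping"
        continue
    fi
    echo "active: $(cat /sys/block/$DEV/queue/scheduler)"
    time dd if=/dev/zero of=/1gfile bs=1M count=1024
    rm -f /1gfile
done
```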

Next, you should run your tests to see how performance has been affected. It's always a good idea to try all four options to find out which fits the workload of your server best. Then you can include the new setting in the boot procedure, for example, by including the echo command shown above in the /etc/init.d/boot.local file. This ensures that the new setting is activated every time your server reboots.
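For example, the end of /etc/init.d/boot.local could look like this; "deadline" and "sda" are placeholders for the scheduler and device you settled on:

```shell
#!/bin/sh
# /etc/init.d/boot.local -- executed at the end of the boot procedure.
# Persist the I/O scheduler choice; "deadline" and "sda" are examples,
# substitute the scheduler and device that tested best on your system.
echo deadline > /sys/block/sda/queue/scheduler
```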

Optimizing the journal
It may also be useful to optimize performance in the file system journal. All modern Linux file systems use a journal to make data recovery easy after a server crash. The basic concept is that before a write to a file occurs, the journal logs the transaction, so that if the write fails it can easily be rolled back. But if your server is very write intensive, the default journal setting may negatively impact write performance. On the other hand, if your server is mostly read intensive, you don't have to care about the journal settings, as the journal is not involved in read transactions.

In the case of the heavily write-oriented server, you can use the data=writeback option when mounting the file system through fstab. With this option only metadata is journaled, not the file data itself, so you still get some protection, but the journal stays lightweight and you lose the least possible time while writing files. The following line shows what the entry in /etc/fstab could look like:

 /dev/sdg1        /           ext3     user_xattr,data=writeback      1 1

After applying this change, restart your server to activate the new setting.
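Once the server is back up, you can confirm that the root file system actually mounted with the new journal mode by checking /proc/mounts:

```shell
#!/bin/sh
# Sketch: show the active mount options for the root file system;
# after the change, data=writeback should appear in the option list.
grep ' / ' /proc/mounts
```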

Here we've discussed how to optimize performance of SUSE Linux Enterprise Server on VMware. Applying the performance-related parameters discussed in this article will optimize the storage channel, and is therefore likely to give your VMs better performance.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Van Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SLED 10 administration.
