In RHEL 6, Red Hat will offer Kernel-based Virtual Machine (KVM) support only. KVM runs as a module in the kernel, which means that KVM virtual machines run as ordinary processes on top of that module. This makes managing virtual machines a lot easier. To make Linux virtualization work optimally on your servers, several kernel improvements have been implemented in RHEL 6. This article gives an overview of the most important of them.
KVM virtualization offers the next generation of virtualization solutions in RHEL 6. Because KVM is integrated directly into the kernel and is more efficient, Red Hat will drop support for Xen completely in RHEL 6, although a migration method for Xen virtual machines will be provided.
Optimized process handling
RHEL processes can be organized into buckets: entities to which CPU and other resources can be assigned. Because virtual machines also run as ordinary processes, they benefit from this property as well.
To have virtual machines work smoothly in a KVM environment, the scheduler treats each virtual CPU as a thread. To handle these threads efficiently, the kernel has a new operating mode, "guest mode." A virtual guest cannot make syscalls directly to the Linux kernel, as a process in system mode can; instead, it makes hypercalls to talk to the hypervisor. This new mode of operation leverages Linux kernel features such as scheduling, accounting and kernel samepage merging (KSM).
Hardware support improvements
Kernel upgrades are just one part of KVM performance; another part is in the hardware itself. The RHEL 6 kernel is able to leverage several features implemented in the hardware. First, with regard to CPU support, there is Extended Page Tables (EPT). This feature makes virtualization faster because less emulation needs to be done. Next is a feature known as IOMMU in an AMD environment and VT-d in an Intel environment. It allows guests to safely use physical I/O devices directly, while at the same time protecting those devices from accidental use by another guest. This protection is necessary because if a guest writes to a device that is already in use by another guest, it might cause the host to crash.
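Whether a host can use these extensions can be verified from the CPU flags the kernel exposes. As a rough sketch (flag names vary by CPU vendor, and the nested paging flags only appear on processors that support them):

```shell
# Look for hardware virtualization flags in /proc/cpuinfo:
# vmx = Intel VT-x, svm = AMD-V; ept/npt indicate nested paging support.
grep -m1 -o -E 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization flag found"
grep -m1 -o -E 'ept|npt' /proc/cpuinfo || echo "no nested paging flag found"
```

If the first command prints nothing but the fallback message, KVM guests on that host will run without hardware acceleration, if at all.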
Another important feature is Single Root I/O Virtualization (SR-IOV). This is I/O virtualization at the PCI level, which allows safe sharing of real hardware. It is especially important for network adapters: one physical device can provide several virtual devices that can be handed out to guests. The last important hardware feature that can be used in a virtual environment is N_Port ID Virtualization (NPIV). This allows sharing of Fibre Channel storage, which means that every guest can get a slice of a storage device that is offered.
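In libvirt, assigning such a physical PCI function (for instance, an SR-IOV virtual function) directly to a guest is done with a `<hostdev>` element in the guest's domain XML. The PCI address below is only an example; the actual address comes from `lspci` on your host:

```xml
<!-- Assign the PCI device at 0000:03:10.0 (for example, an SR-IOV
     virtual function of a network adapter) directly to the guest. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</hostdev>
```

With `managed='yes'`, libvirt detaches the device from the host driver when the guest starts and reattaches it when the guest stops.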
To leverage these hardware improvements, the RHEL kernel has been enhanced. A CPU enhancement allows no fewer than 64 CPUs to be allocated to a single guest. CPU overhead is also minimized by the kernel's read-copy-update (RCU) "locking" feature, which in fact isn't locking at all, but a mechanism that avoids locking, leading to better performance in an SMP environment.
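In a libvirt domain definition, the number of virtual CPUs handed to a guest is simply the `<vcpu>` element, up to that new limit:

```xml
<!-- Give this guest 64 virtual CPUs, the maximum per guest. -->
<vcpu>64</vcpu>
```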
RHEL 6 also has some memory enhancements. First, there are transparent huge pages. This means that huge pages (which allow the kernel to address large chunks of memory instead of 4 KB blocks) are allocated dynamically and therefore no longer have to be planned at system boot. Another kernel feature from which KVM virtualization benefits is Kernel Samepage Merging (KSM). KSM lets multiple VMs share identical memory pages, which is a particular improvement for Windows guests, which normally zero all memory pages on boot.
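With transparent huge pages the kernel handles this automatically, but a guest can also be backed by huge pages explicitly in its libvirt domain XML. A minimal sketch, assuming huge pages are available on the host:

```xml
<!-- Back this guest's memory with huge pages instead of 4 KB pages. -->
<memoryBacking>
  <hugepages/>
</memoryBacking>
```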
The next area of improvement is block I/O. First, there is native AIO and preadv/pwritev support. With these, you can group together memory areas before reading or writing them, which improves performance in multithreaded environments. The kernel now also has MSI interrupt support. This relates to PCI devices and allows working with multiple parallel interrupt lines. Another related change is in block alignment, which now has a better default value that leads to better performance. The result of all these changes is that near-native performance can be offered at the kernel level.
The network interface also benefits from some specific I/O enhancements. The most important of these is vhost-net, which moves a portion of the networking stack from user space to kernel space. This benefits the virtio drivers, the drivers used in KVM guests to offer improved network performance. The virtio drivers also allow the guest to use the TAP driver to talk to the virtual device. The benefit is that this hooks directly into the kernel and therefore gives much better performance than the qemu approach, where emulation is still involved.
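A guest network interface that uses the paravirtualized virtio driver is configured in the libvirt domain XML roughly as follows. The bridge name `br0` is an example; if vhost-net is unavailable, libvirt falls back to the qemu userspace backend:

```xml
<!-- virtio NIC attached to host bridge br0; the vhost driver moves
     packet handling into the kernel instead of qemu user space. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost'/>
</interface>
```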
ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Vugt is also a technical consultant for high-availability (HA) clustering and performance optimization, as well as an expert on SLED 10 administration.