This article can also be found in the Premium Editorial Download "Virtual Data Center: Achieving the right level of consolidation in a virtual data center."
As businesses move to the next generation of virtualization, consolidation ratios will rise, making virtualization management more important. Adding more virtual machines (VMs) to servers can easily stress the underlying physical hardware resources and potentially affect network connectivity.
A number of factors contribute to this situation. First, hardware vendors are increasing the amount of supported physical RAM in the host. In turn, hypervisor vendors are updating their software to address this memory.
Additionally, as businesses embrace the virtual desktop approach, they will begin to see a greater density of VMs per physical box, because virtual desktops are generally less memory-hungry than server-based VMs. Memory has typically been the single biggest constraint on achieving higher consolidation ratios. As that hurdle slowly fades, concern is turning to the other areas that could become performance bottlenecks, namely the I/O generated by network traffic.
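To see why density rises as memory ceilings lift, consider a simple back-of-the-envelope calculation. This is a minimal sketch with hypothetical numbers (host RAM, hypervisor overhead, and per-VM allocations are illustrative assumptions, not vendor figures):

```python
def vm_density(host_ram_gb, hypervisor_overhead_gb, vm_ram_gb):
    """Rough count of VMs that fit in a host's physical RAM,
    ignoring memory-overcommit features such as page sharing."""
    return (host_ram_gb - hypervisor_overhead_gb) // vm_ram_gb

# Assumed: a 256 GB host reserving 8 GB for the hypervisor.
server_vms = vm_density(256, 8, 8)   # 8 GB server VMs -> 31 per host
desktop_vms = vm_density(256, 8, 2)  # 2 GB virtual desktops -> 124 per host
```

Even in this simplified model, lighter desktop VMs quadruple the density of the same host, which is exactly why attention shifts from memory to network load.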
Virtual machine optimization
The key to virtualization management is making sure your network layer configuration doesn't generate unnecessary contention, the general term many virtualization vendors use to describe a situation where a critical resource becomes oversubscribed. There is little to be gained, for instance, from placing two network-intensive VMs on the same hypervisor using the same network interface cards. This would potentially create contention as they compete for an underlying resource, such as the CPU or memory.
In some cases, however, such a configuration is appropriate. For example, two VMs that communicate with each other frequently and transfer large amounts of data may be better placed on the same host.
When two VMs speak to each other on the same vSwitch and on the same host, the physical data layer is not touched at all. All network communication happens within the hypervisor.
In this case, you would not be throttled by the speed of your network but rather by the speed of the physical host's CPU and bus. A good example of this scenario would be the communications between a front-end Web server and a back-end database.
So as you can see, the simple adage of “avoid contention” works in most cases, but true network optimization requires that you understand the relationships between your VMs as well. Most virtualization management software allows you to express these relationships in the form of affinity and anti-affinity rules (see Figure 1).
So you can create rules that say, for example, that a database and Web server must reside on the same host, but that each of your Microsoft Active Directory VMs must never reside on the same host. This allows you to maintain the application scalability and availability that most vendors achieve by scaling out—as opposed to scaling up—their technologies.
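The placement logic behind such rules can be sketched in a few lines. This is a minimal illustration of how affinity and anti-affinity constraints are evaluated; the rule model and the VM/host names are hypothetical and do not reflect any vendor's actual API:

```python
def valid_placement(placement, affinity, anti_affinity):
    """placement maps VM name -> host name.
    affinity pairs must share a host; anti-affinity pairs must not."""
    for a, b in affinity:
        if placement[a] != placement[b]:
            return False  # affinity rule violated: pair split across hosts
    for a, b in anti_affinity:
        if placement[a] == placement[b]:
            return False  # anti-affinity rule violated: pair on one host
    return True

placement = {"web": "host1", "db": "host1", "ad1": "host1", "ad2": "host2"}
ok = valid_placement(placement,
                     affinity=[("web", "db")],
                     anti_affinity=[("ad1", "ad2")])
# ok is True: web/db share host1, and the two AD VMs are kept apart
```

A real scheduler such as VMware DRS evaluates rules like these continuously and migrates VMs to restore compliance, but the pass/fail logic per rule is essentially this simple.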
The next virtualization management step in optimizing your VMs for network connectivity is to configure the guest operating system (OS) that resides within the VM. Most virtualization vendors allow you to configure the VM with an optimized network driver that sits inside the VM.
For example, VMware has the vmxnet2 and vmxnet3 driver sets, whereas Microsoft Hyper-V has the concept of a synthetic network device (see Figure 2). For these to work, you must install the integration software supplied by the virtualization vendor, such as VMware Tools or Hyper-V Integration Services.
Enhanced network drivers reduce the number of CPU cycles required on the physical machine to move network packets from the virtual world to the physical world. Without them, you would probably see reduced network performance and an increase in CPU use on the physical server.
These drivers offer a degree of paravirtualization to the guest OS, making the guest OS more VM-aware than it would be without them. Paravirtualization is the general practice of tuning part of the system to run more efficiently in a virtualized environment.
Additionally, these drivers are often required to leverage more advanced network enhancements. For example, if you want to offer larger maximum transmission unit (MTU) sizes to the guest OS, installing the vendor's enhanced network driver is often a prerequisite.
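Larger MTUs matter because every frame carries fixed per-packet overhead and CPU cost. A minimal sketch of the arithmetic, assuming a simplified 40-byte IP+TCP header per frame (real overhead varies with options and encapsulation):

```python
import math

def frames_needed(payload_bytes, mtu, ip_tcp_overhead=40):
    """Frames required to carry a payload at a given MTU,
    using a simplified 40-byte IP+TCP header per frame."""
    mss = mtu - ip_tcp_overhead  # usable payload per frame
    return math.ceil(payload_bytes / mss)

standard = frames_needed(10_000_000, 1500)  # standard Ethernet MTU -> 6850
jumbo = frames_needed(10_000_000, 9000)     # jumbo-frame MTU -> 1117
```

Moving a 10 MB payload with jumbo frames takes roughly one-sixth the frames, and correspondingly fewer per-packet CPU cycles on the host.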
Remember: On its own, virtualization doesn't make your guest OS run any quicker. If a VM outperforms a physical server, it is largely because of external factors, such as a faster physical server, more resources allocated to the VM than the OS previously had in the physical world, or an upgraded storage layer.
The optimization you learned in the physical world with Windows or Linux still applies in the virtual world. If you learned how to modify the Windows registry to tweak TCP/IP performance, those optimizations are still likely to pay dividends. The same applies to other tweaks, such as changing firewall settings and disabling unwanted services that generate network load your application doesn't need.
About the author: Mike Laverick (VCP) has been involved with the VMware community since 2003. Laverick is a VMware forum moderator and member of the London VMware User Group Steering Committee. Laverick is the owner and author of the virtualization website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users, and has recently joined SearchVMware.com as an Editor at Large. In 2009, Laverick received the VMware vExpert award and helped found the Irish and Scottish VMware user groups. Laverick has had books published on VMware Virtual Infrastructure 3, VMware vSphere 4 and VMware Site Recovery Manager.
This was first published in April 2011