A couple of key configuration settings exist in any hypervisor that can be used to optimize network performance. The goal is to avoid contention and oversubscription of the underlying resources.
First, it is important to create network interface card (NIC) teams for dedicated network traffic types. There are generally six traffic types in most hypervisors:
- Management
- IP-based storage (optional)
- Live migration
- High-availability heartbeat
- Fault tolerance (specific to VMware)
- Virtual machines
In an ideal world, each traffic type would be serviced by a dedicated physical NIC, and if redundancy is required for a given traffic type, that number can easily double. That is often a best practice or recommendation from the virtualization vendor. However, there is room to maneuver if your hardware doesn't have a healthy number of NICs or the platform prohibits the configuration, for example, on older blades that support only two NICs per blade.
Some administrators place their live-migrate traffic on their management network because historically this has been an underused chunk of bandwidth. Despite the use of virtual local area network (VLAN) tagging, you may want to separate this traffic for security compliance as well. The key here is to make sure the underlying network traffic that allows the hypervisor to function — management, IP-based storage and live-migrate traffic — does not affect the VMs. So storage and live-migrate traffic, which can be particularly bandwidth-intensive, should be on separate physical NICs.
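On a VMware host, separating these traffic types comes down to linking specific physical NICs to specific vSwitches. The commands below are a minimal sketch using the classic `esxcfg-vswitch` tool; the vSwitch and vmnic names are illustrative, and the commands must be run on the ESX/ESXi host itself.

```shell
# Sketch: dedicate physical NICs to separate vSwitches per traffic type
# (vSwitch and vmnic names are illustrative; run on the ESX/ESXi host)
esxcfg-vswitch -a vSwitch1           # new vSwitch for IP-based storage
esxcfg-vswitch -L vmnic2 vSwitch1    # first uplink
esxcfg-vswitch -L vmnic3 vSwitch1    # second uplink for redundancy
esxcfg-vswitch -a vSwitch2           # separate vSwitch for live migration
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -l                    # list vSwitches to confirm the uplinks
```

With this layout, a burst of storage or live-migration traffic contends only for its own uplinks, not for the NICs carrying VM traffic.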
You can achieve another area of vSwitch optimization by enabling jumbo frames, which have a much larger maximum transmission unit (MTU) of up to 9,000 bytes on most modern networks. Increasing the MTU from the standard 1,500 bytes helps optimize network performance for all traffic types, whether virtual or physical: fewer frames are needed to send the same volume of data, which reduces per-frame overhead and results in fewer TCP/IP acknowledgement packets.
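The frame-count savings are easy to estimate. The arithmetic below is a back-of-the-envelope sketch that assumes 40 bytes of TCP/IP headers per frame and ignores retransmissions:

```shell
# Frames needed to move 1 GiB at standard vs. jumbo MTU
# (assumes 40 bytes of TCP/IP headers per frame; purely illustrative)
data=$((1024 * 1024 * 1024))
payload_std=$((1500 - 40))      # 1,460-byte payload per standard frame
payload_jumbo=$((9000 - 40))    # 8,960-byte payload per jumbo frame
echo "standard: $(( (data + payload_std - 1) / payload_std )) frames"
echo "jumbo:    $(( (data + payload_jumbo - 1) / payload_jumbo )) frames"
# roughly a six-fold reduction in frames, and in their per-frame overhead
```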
Implementing a larger MTU value is not a task to be undertaken lightly. Every component, from the physical switch to the vSwitch to the VM, must be correctly configured to achieve tangible benefits. With secure protocols such as SSL, a misconfigured MTU value could stop communication altogether.
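One way to confirm the whole path honors jumbo frames is a non-fragmenting ping at close to the full frame size. A sketch using ESXi's `vmkping` (the destination address is illustrative; 8,972 bytes is 9,000 minus the 20-byte IP and 8-byte ICMP headers):

```shell
# Sketch: verify the jumbo-frame path end to end from an ESXi host
# -d sets the don't-fragment bit; if any hop still has a 1,500-byte MTU,
# the ping fails instead of being silently fragmented
vmkping -d -s 8972 192.168.10.20
```

If this ping fails while a default-sized ping succeeds, some component in the path has not been configured for the larger MTU.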
Setting the MTU
The MTU setting within the hypervisor is configurable in many different ways and locations. For instance, if you are using Standard vSwitches in VMware, you would use the command-line tool "esxcfg-vswitch" to set the MTU value. If you are using VMware Distributed vSwitches (see Figure 1), you find it as a setting in the dvSwitch itself.
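For a Standard vSwitch, the change is a one-line command on the host. A minimal sketch (the vSwitch name is illustrative):

```shell
# Sketch: enable jumbo frames on a Standard vSwitch, then confirm it
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -l          # the MTU column should now read 9000
```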
In Microsoft’s Hyper-V, the configuration is held on the properties of the local area connection or network team, and it is controlled essentially by the vendor of your network cards (see Figure 2).
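On the Windows side, the effective MTU of a connection can be inspected and set from the command prompt with `netsh`; whether the NIC actually passes jumbo frames still depends on the vendor driver's advanced settings. A sketch (the interface name is illustrative):

```shell
:: Sketch, run at a Windows command prompt on the Hyper-V host
:: (interface name is illustrative; jumbo support depends on the NIC driver)
netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent
```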
Other options to optimize network performance include settings that improve the algorithms used by teamed network cards. The most common use of teamed network cards is to offer redundancy to the network, especially if NICs are split across different Layer-2 switches.
However, many hypervisors adopt teaming policies by default that are designed more for compatibility than for optimization of the network layer. Most people assume that teaming NICs together will double or triple their available bandwidth, but this is not the case for many hypervisors.
A lot depends on how the hypervisor was developed, and the default settings may indeed need modifying. For example, the default in VMware is a policy called Originating Port ID. It assigns each virtual switch port to a physical NIC in round-robin fashion, which spreads VMs across the uplinks, but each VM's traffic then stays on a single NIC.
Although this offers excellent compatibility in many different network environments, it is not the most optimized policy to adopt. Switching to a load-balancing policy that uses IP data, such as VMware's IP Hash, is usually the best approach.
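The practical difference is that an IP-based policy picks an uplink per source/destination pair rather than per VM, so one VM talking to several endpoints can use several NICs. The toy model below illustrates the idea; the exact hash (XOR of the addresses' last octets, modulo the number of active uplinks) is an assumption for illustration, not VMware's documented implementation.

```shell
# Toy model of IP-hash-style uplink selection (illustrative hash:
# XOR of the two addresses' last octets, modulo the uplink count)
uplinks=2
for pair in "10.0.0.5:10.0.1.7" "10.0.0.5:10.0.1.8" "10.0.0.6:10.0.1.7"; do
  src=${pair%%:*}; dst=${pair##*:}
  s=${src##*.}; d=${dst##*.}
  echo "$src -> $dst uses uplink $(( (s ^ d) % uplinks ))"
done
```

Note that the same VM (10.0.0.5) lands on different uplinks for different destinations, which is what a per-port policy such as Originating Port ID cannot do.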
Optimizing physical switches
Care must be taken to confirm that the physical switches support the correct protocols and standards. For example, VMware's IP Hash policy requires the physical switch ports to be enabled for IEEE 802.3ad link aggregation. In the case of VMware, the configuration is carried out on the properties of the vSwitch (see Figure 3).
In Microsoft Hyper-V (see Figure 4), teaming is a function of your vendor's network settings. The interesting thing here is that you may have additional proprietary options from the NIC vendor; Intel network cards, for example, offer a specific option for Virtual Machine Load Balancing.
With the release of VMware vSphere 4.1 and Citrix XenServer 5.0, you will find that modern hypervisors ship with some kind of network I/O settings that allow you to control bandwidth as it leaves and enters the hypervisor. These controls are new, and it remains unclear how significant this development will be.
Some customers may prefer to handle these I/O controls using methods available outside the physical server, embedded in the new 10/20 Gbps hardware that allows the administrator to control bandwidth allocations independent of hypervisor version or vendor. Remember, these configuration changes don't necessarily double or triple your available bandwidth, but they should increase the overall I/O capabilities of the hypervisor in question.