Tip

Fine-tune hypervisor settings to optimize network performance

A few key configuration settings in any hypervisor can be used to optimize network performance. The goal is to avoid contention for, and oversubscription of, the underlying physical resources.

First, it is important to create network interface card (NIC) teams for dedicated network traffic types. There are generally six traffic types in most hypervisors:

  • Management
  • IP-based storage (optional)
  • Live migration
  • High-availability heartbeat
  • Fault tolerance (specific to VMware)
  • Virtual machines

In an ideal world, each traffic type would be serviced by a dedicated physical NIC, and if redundancy is required for a given traffic type, that number can easily double. Dedicated NICs per traffic type are often a best practice or recommendation from the virtualization vendor. However, there is room to maneuver if your hardware doesn't have a healthy number of NICs or the platform prohibits the configuration, for example, on older blades that support only two NICs per blade.
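
As a hedged sketch of dedicating a NIC pair to one traffic type, the following creates a Standard vSwitch on an ESX host and attaches two uplinks; the vSwitch, uplink and port group names here are assumptions for illustration only:

    # Create a vSwitch dedicated to live-migration traffic
    esxcfg-vswitch -a vSwitch2

    # Attach a redundant pair of physical uplinks
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2

    # Add a port group for the traffic type
    esxcfg-vswitch -A "LiveMigration" vSwitch2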

Some administrators place live-migration traffic on their management network because, historically, this has been an underused chunk of bandwidth. Even with virtual local area network (VLAN) tagging in place, you may want to separate this traffic for security compliance. The key here is to make sure the underlying network traffic that allows the hypervisor to function (management, IP-based storage and live migration) does not affect the VMs. Storage and live-migration traffic, which can be particularly bandwidth-intensive, should therefore be on separate physical NICs.
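
Where traffic types do share uplinks, VLAN tagging at the port group level keeps them logically separated. A minimal sketch on an ESX host, with the VLAN ID and port group name assumed for illustration:

    # Tag the management port group with VLAN 20 on vSwitch0
    esxcfg-vswitch -v 20 -p "Management Network" vSwitch0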

You can achieve another area of vSwitch optimization by enabling jumbo frames, which support a much larger maximum transmission unit (MTU) of up to 9,000 bytes on most modern networks. Increasing the MTU value from the standard 1,500 bytes helps optimize network performance for all traffic types, virtual or physical, because fewer frames are needed to send the same volume of data. Fewer frames mean less per-frame processing overhead, and that results in fewer TCP/IP acknowledgement packets.
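
To put rough numbers on that claim (assuming standard 20-byte IP and 20-byte TCP headers and no options): a 1,500-byte MTU carries about 1,460 bytes of TCP payload per frame, while a 9,000-byte MTU carries about 8,960 bytes. Moving 1 GB of data therefore takes roughly 685,000 standard frames but only about 112,000 jumbo frames, around a sixfold reduction in the number of frames the hosts and switches must process.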

Implementing a larger MTU value is not a task to be undertaken lightly. Every component in the path, from the physical switch to the vSwitch to the VM, must be correctly configured to achieve tangible benefits. With secure protocols such as SSL, a misconfigured MTU can stop communication altogether.
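
One way to confirm the path is configured end to end is a do-not-fragment ping at the largest payload that fits in a 9,000-byte frame (8,972 bytes, allowing 28 bytes of ICMP and IP headers). A minimal sketch from an ESX host, with the target vmkernel address assumed for illustration:

    # Ping with "don't fragment" set at the jumbo payload size;
    # if any hop is still at a 1,500-byte MTU, the ping fails
    # instead of silently fragmenting
    vmkping -d -s 8972 10.0.0.50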

Setting maximum MTU

The MTU setting within the hypervisor is configurable in a number of ways and locations. For instance, if you are using Standard vSwitches in VMware, you set the MTU value with the command-line tool esxcfg-vswitch. If you are using VMware Distributed vSwitches (see Figure 1), you will find it as a setting on the dvSwitch itself.
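
For the Standard vSwitch case, a minimal sketch, assuming a vSwitch named vSwitch1 that carries IP-based storage traffic:

    # Raise the MTU on the vSwitch to support jumbo frames
    esxcfg-vswitch -m 9000 vSwitch1

    # Verify the change; the MTU column should now read 9000
    esxcfg-vswitch -l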

In Microsoft Hyper-V, the configuration is held in the properties of the local area connection or network team, and it is essentially controlled by the vendor of your network cards (see Figure 2).
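
The jumbo-frame toggle itself lives in the NIC driver's advanced properties and varies by vendor, but the Windows IP-layer MTU on the parent partition can be checked and adjusted from the command line. A hedged sketch, assuming an interface named "Local Area Connection":

    rem Show the current MTU for each interface
    netsh interface ipv4 show subinterfaces

    rem Set the IP MTU; jumbo frames must also be enabled in the
    rem NIC driver's advanced properties for this to take effect
    netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent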

Other options to optimize network performance include the settings that govern the load-balancing algorithms used by teamed network cards. The most common use of teamed network cards is to offer redundancy to the network, especially if the NICs are split across different Layer-2 switches.

However, the teaming policies many hypervisors adopt by default are designed more for compatibility than for optimization of the network layer. Most people assume that teaming NICs together will double or triple their available bandwidth, but for many hypervisors this is not the case.

A lot depends on the way the hypervisor was developed, and the default settings may indeed need modifying. For example, the default in VMware is a policy called Originating Port ID, which pins each virtual machine's virtual port to a single physical NIC. It distributes VMs across the uplinks rather than balancing the actual traffic load.

Although this offers excellent compatibility in many different network environments, it is not the most optimized policy to adopt. Switching to a load-balancing policy that uses IP data, such as VMware's IP Hash, is usually the best approach.
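
On newer ESXi releases the same change can be scripted with esxcli; earlier versions expose the policy only in the vSphere Client. A hedged sketch, assuming the policy is applied to vSwitch0:

    # Set the Standard vSwitch load-balancing policy to IP hash
    esxcli network vswitch standard policy failover set --load-balancing iphash --vswitch-name vSwitch0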

Optimizing physical switches

Care must be taken to confirm that the physical switches support the correct protocols and standards. For example, VMware's IP Hash policy requires the physical switch to be configured for IEEE 802.3ad link aggregation. In VMware, the configuration is carried out on the properties of the vSwitch (see Figure 3).
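
On the physical side, a hedged sketch of the matching configuration on a Cisco IOS switch, assuming the host's uplinks sit in ports Gi0/1 and Gi0/2; note that Standard vSwitches require static link aggregation ("mode on") rather than LACP:

    ! Bundle the two host-facing ports into a static EtherChannel
    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode on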

In Microsoft Hyper-V (see Figure 4), teaming is a function of your vendor's network settings. The interesting thing here is that you may have additional proprietary options from the NIC vendor. For example, Intel network cards include a specific option for Virtual Machine Load Balancing.

With the release of VMware vSphere 4.1 and Citrix XenServer 5.0, you will find that modern hypervisors ship with some kind of network I/O control settings that allow you to manage bandwidth as it leaves and enters the hypervisor. These controls are new, and it remains unclear how significant this development will be.

Some customers may prefer to handle these I/O controls by using methods available outside of the physical server, embedded in the new 10/20 Gbps hardware that allows the administrator to control bandwidth allocations independent of hypervisor version or vendor. Remember, these configuration changes don't necessarily double or triple your available bandwidth, but they should increase the overall I/O capability of the hypervisor in question.

About the author:
Mike Laverick (VCP) is an award-winning expert and author who has been involved with the VMware community since 2003. He is a VMware forum moderator and member of the London VMware User Group Steering Committee. Laverick is the owner and author of the virtualization website and blog RTFM Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users.

This was first published in April 2011
