With all of the buzzwords in IT, it’s easy to dismiss network virtualization as merely a fad or a byproduct of the IT industry’s push to virtualize everything. However, network virtualization offers at least three concrete benefits, and a handful of best practices can help you realize them. Let’s review those advantages and then consider the best practices.
Network virtualization advantages
The first advantage is that network virtualization can help make better use of network resources. In some cases, this means increasing the utilization of a physical resource. For example, multiple virtual servers might be tied to a single physical network interface card (NIC). A single virtual machine (VM) might not be able to fully utilize the capacity of a Gigabit Ethernet (GbE) or 10 GbE NIC, but when the NIC is shared among multiple VMs, its capacity is more fully utilized.
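The arithmetic behind this kind of consolidation is simple. The sketch below uses hypothetical per-VM traffic figures (the VM names and throughput numbers are invented for illustration) to show how several modest workloads can together make good use of one 10 GbE NIC:

```python
# Hypothetical average throughput per VM, in Gbps.
vm_throughput_gbps = {"web01": 0.8, "web02": 1.1, "db01": 2.4, "backup01": 1.7}

NIC_CAPACITY_GBPS = 10.0  # a single shared 10 GbE physical NIC

# Sum the demand of all VMs sharing the NIC and compute its utilization.
total = sum(vm_throughput_gbps.values())
utilization = total / NIC_CAPACITY_GBPS

print(f"Aggregate demand: {total:.1f} Gbps")
print(f"NIC utilization: {utilization:.0%}")
```

No single VM here comes close to saturating the NIC on its own, but together they drive it to roughly 60% utilization instead of leaving four NICs mostly idle.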
Network virtualization can also be used to offload traffic from overused resources. For example, an IT professional could create a virtual backbone that exists solely as a logical structure to connect VMs that reside on a common host. Doing so offloads backbone traffic from physical switches. Suppose two applications, such as a database and a query system, exchange a large amount of data. An IT professional might convert those two applications to VMs, host them on the same physical server and create a virtual network within that server to carry the traffic between them.
Of course, this same concept can also be used to free up network hardware. In some cases, the use of virtual network hardware can help to free up ports on physical network switches.
A second advantage to network virtualization is isolation. In some cases, it may be desirable to isolate certain protocols to a dedicated network segment for the sake of security. There are a number of protocols, or traffic types, that could be isolated. For instance, an IT professional could isolate server backbone traffic. Another option might be to isolate HTTP traffic to avoid exposing it to other types of packets. Virtualization allows such isolation without the hassles of deploying dedicated physical segments.
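Isolation of this sort is commonly implemented with 802.1Q VLAN tagging, where a 4-byte tag inserted into the Ethernet header carries a 12-bit VLAN ID that switches use to keep traffic classes apart. A minimal sketch of building that tag (the VLAN numbers chosen for backbone and HTTP traffic are arbitrary examples):

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID followed by the TCI field
    (3-bit priority, 1-bit drop-eligible indicator, 12-bit VLAN ID)."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

# Hypothetical assignment: backbone traffic on VLAN 100, HTTP on VLAN 200.
backbone_tag = vlan_tag(100)
http_tag = vlan_tag(200, priority=3)

print(backbone_tag.hex())  # 81000064
print(http_tag.hex())      # 810060c8
```

Because the isolation lives in the tag rather than in cabling, moving a traffic class to its own segment becomes a configuration change instead of a hardware project.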
The third advantage to network virtualization is port aggregation. I already talked about a situation where multiple VMs use a single NIC. That’s a form of aggregation that can be used to increase utilization. However, aggregation can also be used to increase capacity. For example, multiple physical NICs can be bonded together as a single logical NIC as a way of increasing the amount of available bandwidth beyond what a single NIC could provide.
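Linux NIC bonding, for instance, can spread traffic across member NICs by hashing flow identifiers (the kernel's layer3+4 transmit hash policy XORs IP addresses and ports). The sketch below imitates that idea in a simplified form; the flows and NIC count are invented for illustration, and real bonding implementations differ in detail:

```python
import ipaddress

def pick_nic(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
             nic_count: int) -> int:
    """Simplified flow hash: XOR the numeric IP addresses and ports,
    then take the result modulo the number of bonded NICs.

    Loosely modeled on the layer3+4 xmit_hash_policy of Linux bonding."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= src_port ^ dst_port
    return h % nic_count

# Two bonded NICs; each flow sticks to one member NIC, which preserves
# per-flow packet ordering while different flows use different links.
flows = [
    ("10.0.0.5", "10.0.0.9", 49152, 443),
    ("10.0.0.5", "10.0.0.9", 49153, 443),
]
for f in flows:
    print(f, "-> NIC", pick_nic(*f, nic_count=2))
```

The key property is that any one flow always hashes to the same NIC, so aggregate bandwidth grows with the number of flows even though a single flow never exceeds one NIC's capacity.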
Network virtualization best practices
Network virtualization adds layers of abstraction and some additional complexity to a network, so most network virtualization vendors provide a set of best practices for the deployment process. These best practices tend to be product-specific, but there is one best practice that IT professionals should adhere to, regardless of the network virtualization product they are using. Know the network’s performance!
When creating an external virtual network (a virtual network that uses physical networking components), it is critically important to measure network performance both before and after setting up the virtual network, with the before readings serving as a baseline. Take measurements immediately after the change, and continue to sample over time.
Performance monitoring is important because when a professional creates a logical virtual network structure, it is sometimes possible to affect seemingly unrelated networking components. For example, a professional might discover that the changes being made are overloading the Internet connection. I have also seen situations in which network virtualization overloaded, or saturated, physical routers and caused them to begin dropping packets. Monitoring is the only reliable way to ensure that network resources are being properly utilized without becoming saturated.
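Comparing measurements against a baseline can be automated very simply. The sketch below flags a resource when its post-change latency deviates too far from the pre-change baseline; the tolerance threshold and the latency samples are invented for illustration:

```python
from statistics import mean

def degraded(baseline_ms: list, current_ms: list,
             tolerance: float = 0.25) -> bool:
    """Return True if mean latency rose more than `tolerance`
    (as a fraction) above the pre-change baseline."""
    return mean(current_ms) > mean(baseline_ms) * (1 + tolerance)

# Hypothetical round-trip times to a physical router, sampled before and
# after a virtual network was layered on top of it.
before = [1.1, 1.0, 1.2, 1.1]   # ms
after = [2.3, 2.1, 2.6, 2.4]    # ms

if degraded(before, after):
    print("Router latency is up sharply -- check for saturation.")
```

The same comparison can be applied to throughput, packet loss or any other metric the monitoring tool exposes; the essential point is that without the "before" samples there is nothing to compare against.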
Next, ensure that the network monitoring software is virtual-network aware. Network monitoring software typically resides on a network server and uses the server’s NIC to issue ping and Simple Network Management Protocol requests to network devices. Such software usually performs packet sniffing as well. Unless the software is virtualization aware, however, there is a good chance it will not be able to see the traffic traveling over virtual network segments. After all, one of the main reasons for creating a virtual network is to isolate traffic; a reporting tool that does not account for that isolation can miss traffic entirely or skew its results in unexpected ways.
Even if the network monitoring software is able to see virtual network traffic, only virtualization-aware monitoring tools will show the big picture. For example, network monitoring software that is not primed for virtualization may fail to display virtual network segments, or it may display such segments, but fail to identify them as being virtual. The actual outcome depends heavily on the monitoring software that is used, as well as on the type of network virtualization in place.
This occurs because network components, such as routers, typically function at three different layers: the hardware layer, the software layer and the control framework layer. The layer at which the virtualization solution is implemented greatly affects the virtual network’s visibility to management software. For example, if a software update allowed a router to natively support virtual networks, then the router would most likely hide all virtual network-related traffic from any management tool that was not specifically designed to monitor virtual networks.
Network virtualization can offer some tremendous benefits. However, it is critically important to adhere to any recommended best practices when deploying network virtualization to achieve best results and avoid adversely affecting network performance.
ABOUT THE AUTHOR: Brien M. Posey has received Microsoft’s Most Valuable Professional award six times for his work with Windows Server, IIS, file systems/storage and Exchange Server. He has served as CIO for a nationwide chain of hospitals and healthcare facilities and was once a network administrator for Fort Knox.
This was first published in June 2011