While server virtualization in the data center can greatly increase utilization and agility, it also presents new management and design challenges. One area that's often misunderstood is the network edge, or access layer, which is extended into the hypervisor in the form of a virtual switch, or vSwitch.
Overall, the functionality of the vSwitch is very similar to that of a physical switch. Each virtual host must be connected to a vSwitch the same way a physical host must be connected to a physical switch for IP traffic to move across the network. However, looking at the lower-level functionality of the software implementation, the differences become more apparent. This is especially true for organizations with higher consolidation ratios.
At its core, a Layer 2 switch is designed to examine the MAC address of a frame and move the data from one port to another as quickly as possible. Switches have long provided simple frame segregation through VLAN tagging, which offers basic isolation of traffic and broadcast domains. Within VMware Infrastructure, a vSwitch has all of the basic functionality found in any low-end physical switch.
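The Layer 2 behavior described above can be sketched in a few lines: learn the source MAC on the ingress port, forward known destinations out the learned port, and flood unknown destinations only within the frame's VLAN. This is an illustrative model, not VMware code; the class and port names are invented for the example.

```python
class L2Switch:
    """Toy model of the MAC learning and VLAN isolation a basic switch performs."""

    def __init__(self, ports):
        # ports maps port name -> VLAN ID (access ports, for simplicity)
        self.ports = ports
        self.mac_table = {}  # (vlan, mac) -> port where that MAC was last seen

    def receive(self, in_port, src_mac, dst_mac):
        vlan = self.ports[in_port]
        # Learn: remember which port the source MAC lives on.
        self.mac_table[(vlan, src_mac)] = in_port
        # Forward: a known destination goes out exactly one port...
        out = self.mac_table.get((vlan, dst_mac))
        if out is not None and out != in_port:
            return [out]
        # ...an unknown destination floods to every other port in the same VLAN.
        return [p for p, v in self.ports.items() if v == vlan and p != in_port]

sw = L2Switch({"p1": 10, "p2": 10, "p3": 20})
print(sw.receive("p1", "aa", "bb"))  # unknown dst: floods only to p2 (same VLAN)
print(sw.receive("p2", "bb", "aa"))  # learned earlier: forwards straight to p1
```

Note that port p3, sitting in VLAN 20, never sees the VLAN 10 traffic, which is the "very basic isolation" the article refers to.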
For early adoption, or a more simplistic virtualized data center, the vSwitch functionality is acceptable. However, larger networks with more complex designs may be limited by the features in vSwitch, especially when it comes to troubleshooting a complex networking issue or deploying a virtual machine (VM) infrastructure that needs to span data centers or disaster recovery sites. This is also important when considering the number of Ethernet connections each vSphere server requires.
Another issue that arises from the network being extended into the ESX server is the actual management and configuration of the vSwitch. The skills required for an ESX administrator to install and configure an ESX host do not necessarily extend to a solid understanding of the data center network or IP networking in general. Conversely, network engineers may not grasp the concepts of virtualization and ESX management. This administrative gray area becomes even more troublesome because vSwitches are managed per ESX host, meaning each vSwitch must be configured manually on every ESX server. This leaves ample room for misconfiguration or errors caused by gaps in product or IT knowledge. Companies should consider turning to outside expertise to augment IT staff knowledge around best practices for the network in a virtualized environment.
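Because each host's vSwitch is configured independently, even a simple audit script can catch the drift the paragraph above warns about. The sketch below compares every host's port-group-to-VLAN mapping against a reference host; the host names, port groups, and VLAN IDs are invented for illustration, and in practice the data would be pulled from each host's configuration via management tooling.

```python
# Hypothetical per-host vSwitch configs: port group -> VLAN ID.
hosts = {
    "esx01": {"vSwitch0": {"VM Network": 100, "vMotion": 200}},
    "esx02": {"vSwitch0": {"VM Network": 100, "vMotion": 201}},  # mistyped VLAN
    "esx03": {"vSwitch0": {"VM Network": 100}},                  # missing port group
}

def find_drift(hosts, reference):
    """List every way a host's vSwitch config differs from the reference host's."""
    baseline = hosts[reference]
    issues = []
    for host, switches in hosts.items():
        if host == reference:
            continue
        for vswitch, portgroups in baseline.items():
            for pg, vlan in portgroups.items():
                actual = switches.get(vswitch, {}).get(pg)
                if actual is None:
                    issues.append(f"{host}: {vswitch} missing port group '{pg}'")
                elif actual != vlan:
                    issues.append(f"{host}: '{pg}' on VLAN {actual}, expected {vlan}")
    return issues

for issue in find_drift(hosts, "esx01"):
    print(issue)
```

Each of the two deliberately planted errors (the mistyped vMotion VLAN on esx02 and the missing port group on esx03) is flagged, the same class of mistake that manual per-host configuration invites.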
Making sense of the virtual network
So what does all this mean? Consider a typical physical environment: Each server has a dedicated network cable. If any single cable or switch port goes bad, only one server is down. In a virtual environment, that one cable could provide connectivity to 10 or more virtual machines. A single cable or switch port failure would mean loss of connectivity to multiple VMs. Resolving this type of failure can be difficult. Gone is the ability to walk into a data center and view all of your network connections and the individual lights on the switch.
A similar scenario plays out when it comes to evaluating bandwidth. A VMware server can sometimes host 24 running VMs. This server is going to require more bandwidth than a standalone server and accommodating this bandwidth has an impact on physical switches. One or two Gigabit Ethernet connections may no longer be sufficient, especially with newer technologies such as iSCSI. Many organizations using vSphere leverage six to eight physical network ports per vSphere server.
Plugging an ESX server into a standard "top-of-rack" switch can lead to problems. This top-of-rack switch is designed to provide connectivity for 24 systems. If four VMware servers, each running 24 virtual machines, are plugged into that one 24-port switch, essentially four times the number of systems for which that switch was designed have now been plugged into it.
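The oversubscription arithmetic in that scenario is simple but worth making explicit:

```python
# Four ESX hosts, each running 24 VMs, plugged into a 24-port top-of-rack switch.
switch_ports = 24      # systems the switch was designed to serve
esx_hosts = 4
vms_per_host = 24

effective_systems = esx_hosts * vms_per_host        # 96 "systems" behind 4 ports
oversubscription = effective_systems / switch_ports
print(f"{effective_systems} systems on a {switch_ports}-port switch "
      f"-> {oversubscription:.0f}x the design load")
```

Ninety-six effective systems behind four physical ports is four times the population the switch was sized for, before accounting for any bandwidth headroom per VM.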
These issues require rethinking and re-architecting the data center network from the ground up. The first step is to revise how we refer to our equipment. Physical servers running VMware ESX should no longer be considered "servers" in the traditional sense. They are now more like "access layer compute nodes" on which virtual servers run. At the same time, the virtual switches created within VMware servers should be considered top-of-rack, or access layer switches. Organizations should therefore plug VMware "access layer compute nodes" into Core or Distribution layer switches.
To help solve many of the network-related challenges associated with virtualization, VMware established a development and engineering relationship with Cisco Systems. Through a joint development effort, Cisco and VMware created two technologies to greatly increase the functionality of the vSwitch inside an ESX host. VMware developed the concept of a distributed virtual switch (DVS), a vSwitch that spans its ports and management across all ESX servers in the cluster. This solves one of the core management issues of the isolated vSwitch. Now, basic configuration details can be pushed across the cluster, helping to eliminate some of the common configuration errors that can arise in a virtualized data center.
In addition, Cisco developed the Nexus 1000V, a replacement vSwitch for ESX that gives the network back to the network operations team. It is a fully functional Cisco switch that runs the next-generation Cisco software called NX-OS. NX-OS is an evolution of SAN-OS, which matured on the MDS line of Fibre Channel switches. Inside the Nexus 1000V, network administrators will find a look and feel familiar from IOS. The Nexus 1000V includes many of the most common features of any Catalyst switch, such as ERSPAN, SSH, ACLs, L2/L3 awareness, TACACS+/RADIUS, CLI and others. Additionally, new VMware-specific features have been added.
Connectivity type is another consideration. When implementing a virtualized data center, each vSphere server typically requires six to eight network ports. While the overall virtualization project will typically yield an aggressive return on investment, the networking component can be an expensive one. To address this, many customers compare the cost of six to eight 1 GbE connections against the cost of two 10 GbE connections, and often find a cost savings. They also find simplicity in having two network connections per vSphere server.
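The 1 GbE versus 10 GbE comparison above can be framed as a small per-server cost model. The per-port and cabling figures below are placeholders, not vendor pricing; the point is the structure of the comparison (NIC port, matching switch port, and cable for each connection), into which real quotes would be substituted.

```python
def per_server_cost(ports, port_cost, cable_cost):
    # Each connection needs a NIC port, a matching switch port (folded into
    # port_cost here), and one cable.
    return ports * (port_cost + cable_cost)

# Illustrative figures only -- substitute actual quotes:
cost_1g  = per_server_cost(ports=8, port_cost=150, cable_cost=10)
cost_10g = per_server_cost(ports=2, port_cost=500, cable_cost=25)

print(f"8 x 1 GbE:  ${cost_1g}")
print(f"2 x 10 GbE: ${cost_10g}")
```

With these placeholder numbers the two-port 10 GbE design comes out cheaper as well as simpler, but the conclusion depends entirely on the prices plugged in, which is why the article frames it as a comparison customers run for themselves.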
The overwhelming benefits of virtualization are leading organizations to wider adoption of the technology. However, optimizing the advantages of a virtualized environment requires careful planning and insight into the far-reaching impacts of moving from physical to virtual.
ABOUT THE AUTHOR: Alex Weeks is Senior Solutions Architect for Kovarus. Over the last decade, he has grown his technical skills and professional services management background by engaging in all aspects of IT solution implementation for a variety of product offerings. In addition to honing his VMware expertise, Alex has served as the lead technician on many Linux installations. Alex has more than 15 years of industry experience holding various technical sales, systems engineer and consulting positions at leading companies, including MTI, Eastern Computer Exchange and IBM.
ABOUT KOVARUS: Kovarus Inc. is a premier technology consulting firm, specializing in data center design, implementation and optimization. With its extensive industry expertise and proven methodologies, Kovarus helps companies align IT with strategic business goals and maximize their return on technology investments. Headquartered in the San Francisco Bay Area, the company was founded in 2003 and has been chosen as a premier partner by leading technology companies including VMware, EMC and Cisco.
What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at firstname.lastname@example.org.