Incorporating I/O virtualization into network architecture

Virtualized I/O network architecture can help reduce hardware costs and streamline network management.

This is the second tip in a two-part series on network infrastructure planning. The first tip covered network infrastructure planning for a virtual environment.

There are many options for IT administrators to consider when looking at a network interface setup. The start is simple enough: you have a physical server with two network interface cards (NICs). Now what? Depending on the size of the environment, engineers then have to worry about application load, switch capabilities, virtual local area networks (VLANs), workload traffic and general user traffic loads. In this tip, I'll discuss how I/O virtualization can be incorporated into network architecture to help reduce hardware costs.

In the virtualized network infrastructure, I/O virtualization addresses issues related to network and interface capacity. Instead of having multiple cards and cables per server, I/O virtualization employs a single high-speed I/O link for each physical machine. The beauty here is that the high-speed I/O link is logically managed as multiple virtual resources. Much like having multiple virtual machines (VMs) running on a single physical host, virtual I/O lets administrators create multiple virtual NICs (for network connectivity) and virtual host bus adapters (HBAs) (for Fibre Channel storage). These virtualized I/O cards operate exactly as the physical Ethernet and Fibre Channel hardware they are replacing. The key point is that because these virtual NICs and HBAs remain logically distinct, they create network and storage connections that also remain logically distinct.
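The partitioning described above can be sketched as a toy model. This is illustrative only — real I/O virtualization happens in the adapter hardware and hypervisor, and the names and the 10 Gbps capacity here are invented assumptions, not any vendor's API:

```python
# Toy model: carving one high-speed physical link into logically
# distinct virtual NICs and virtual HBAs. Device names and capacities
# are hypothetical placeholders for illustration.

class PhysicalLink:
    def __init__(self, capacity_gbps):
        self.capacity_gbps = capacity_gbps
        self.virtual_devices = []

    def add_virtual_device(self, name, kind, share_gbps):
        # Each virtual NIC/HBA keeps its own identity and bandwidth share,
        # so network and storage traffic stay logically distinct.
        allocated = sum(d["share_gbps"] for d in self.virtual_devices)
        if allocated + share_gbps > self.capacity_gbps:
            raise ValueError("link oversubscribed")
        self.virtual_devices.append(
            {"name": name, "kind": kind, "share_gbps": share_gbps}
        )

link = PhysicalLink(capacity_gbps=10)
link.add_virtual_device("vnic0", "NIC", 4)   # general network traffic
link.add_virtual_device("vnic1", "NIC", 2)   # management VLAN
link.add_virtual_device("vhba0", "HBA", 4)   # Fibre Channel storage

print([d["name"] for d in link.virtual_devices])
```

The point the sketch makes is the same one the paragraph does: a single physical link presents itself as several independent devices, each of which the hypervisor can hand to a VM as if it were dedicated hardware.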

There are several benefits of utilizing I/O virtualization in a data center:

  • Fewer I/O cards per server: The beauty of virtualization is that we are eliminating the physical layer. Unlike physical NICs and Fibre Channel HBAs, virtual NICs and HBAs are dynamically created and presented to VMs without needing to reboot the underlying server. This means more uptime for the physical host.
  • Less cabling: As IT engineers, we have all had to deal with cabling issues in one way or another. Whether there is a direct need to trace a faulty cable to a component or replace a remote server’s secondary link, working with cabling can be daunting. So why not combine some functional networking tasks? Merging storage and network traffic increases the utilization of a given link, which ultimately reduces costs and simplifies the infrastructure. Since each physical I/O link can support and handle all the traffic the server can theoretically deliver, multiple cables are no longer needed.
  • Improved data center economics: Hardware costs money. Cards, cables and server peripherals can eat up a budget very quickly. By using virtual I/O, an environment can have the needed NICs and HBAs that can then be deployed in a smaller server package, thus saving space, costs and power. Blade systems also benefit from the unrestricted connectivity that effectively eliminates the limitations on port count found in some systems.
  • Easier interface management: Virtualization platform graphical user interfaces, such as those from XenServer, vSphere and Hyper-V, now come with the ability to granularly manage an environment’s network architecture. The ability for a junior- or mid-level engineer to look at the virtual NICs and make live changes without harming the environment is a big step above worrying about unplugging physical cabling.

As virtualization continues to expand within data centers, improving network I/O will be a constant battle. Utilizing virtual I/O technologies will help meet the needs of a growing virtual infrastructure.

Tools, tips and best practices
As mentioned earlier, every environment is unique, so every environment will have its own set of requirements for network interface capacity. Change is coming to all aspects of network architecture, so to better understand the needs of an environment and plan carefully, consider these steps:

  1. Gather performance metrics: This is fairly self-explanatory. The most basic way to gather performance metrics is to use tools included with your operating system. If you gather networking, storage and application metrics over a month, you should get a good picture of peaks and valleys. If the basic tools aren't quite enough, hire a consultant to do the initial metrics-gathering.
  2. Use tools to help gather data: There are software packages available that can help gather network metrics better than embedded operating system tools. Third-party software also has the potential to gather more specific data such as who is using the network and how, and some even let you see traffic from specific network ports and IP addresses.
  3. Know the network architecture and understand interface interconnectivity: Too often, a company will purchase amazingly expensive network switches only to find that its servers or cabling environment won’t benefit much from the new hardware. By knowing the environment, administrators can get an idea of what they need to buy. Often, a medium-sized infrastructure will run perfectly fine on a gigabit network and see very little performance gain if other internal components are upgraded. A network study can show how much internal bandwidth is actually being used and whether there really is a problem.
  4. Consider ROI: Even if there are benefits to moving to a 10 GbE environment or replacing cards on a server, make sure the return on investment is there to back it up. Monetary expenditures that outweigh tangible infrastructure gains will be seen as a waste of money to seasoned IT managers. Sometimes it’s just not worth it, so know what the environment requires before spending the money.
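Step 1 above can begin with nothing more than the cumulative byte counters the operating system already exposes (for example, /proc/net/dev on Linux or perfmon counters on Windows). A minimal sketch, with invented sample values, of turning those counters into peak and average throughput:

```python
# Compute per-interval throughput from cumulative RX byte counters,
# the kind of data OS-level tools expose. Sample values are made up
# for illustration; a real study would sample over weeks, not minutes.

samples = [  # (seconds since start, cumulative bytes received)
    (0, 0),
    (60, 90_000_000),
    (120, 450_000_000),  # a traffic spike lands in this interval
    (180, 520_000_000),
]

rates_mbps = []
for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
    # bytes -> bits, divided by interval length, scaled to megabits/sec
    rates_mbps.append((b1 - b0) * 8 / (t1 - t0) / 1_000_000)

print(f"peak: {max(rates_mbps):.0f} Mbps")      # peak: 48 Mbps
print(f"average: {sum(rates_mbps) / len(rates_mbps):.0f} Mbps")
```

Collecting a month of such intervals gives exactly the peaks-and-valleys picture the step describes, and tells you whether the peaks ever approach the capacity of the links you are considering consolidating.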
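Step 4 often reduces to back-of-the-envelope arithmetic. A hedged sketch with hypothetical dollar amounts — real figures would come from your own hardware quotes and utility bills:

```python
# Rough payback-period calculation for an I/O consolidation project.
# Every dollar amount below is an invented placeholder.

upgrade_cost = 24_000          # adapters, cables, switch ports
monthly_savings = (
    400    # power and cooling from fewer physical cards
    + 250  # reduced cabling and port maintenance
    + 350  # admin time saved by easier interface management
)

payback_months = upgrade_cost / monthly_savings
print(f"payback: {payback_months:.1f} months")  # payback: 24.0 months
```

If the payback period stretches past the expected life of the hardware, the seasoned IT managers mentioned above are right to call the spend a waste.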

Technology is an ever-changing entity. Technological success will eventually be determined by the user experience that it delivers. Good IT managers must always be ready to take advantage of both existing and new technologies within a network infrastructure to scale their environments as their computing needs continue to grow.

About the expert: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.

This was first published in July 2011
