This is the first tip in a two-part series on network infrastructure planning. The second tip covers incorporating I/O virtualization into network architecture.
When physical hosts run multiple workloads, servers require more network bandwidth and more network connections to handle the increased data flow. Because network resources are shared among multiple applications, ensuring that critical applications get adequate network resources becomes more complex, and shortfalls can profoundly affect their performance and availability. This tip helps administrators understand the networking needs of a virtual environment and build a network infrastructure that improves performance and saves money.
Understanding network planning
One of the most remarkable aspects of IT is its constant push for new innovations, which in the process creates new challenges for data center managers. Server technology must progress in line with network capabilities: the ability to transfer data over a network is directly limited by the capabilities of the transferring devices. That is, the mere availability of a 10 Gigabit Ethernet (GbE) device does not guarantee 10 GbE performance. A server capable of handling that throughput is necessary, as is a current wiring infrastructure. Planning is therefore an essential part of any network infrastructure upgrade or expansion.
Each plan must be environment specific. If the situation calls for 10 physical hosts with 100 virtual machines (VMs) running in a pooled environment, then the engineer will need to understand that the network infrastructure is going to become complex. However, if the environment calls for two physical hosts running only four VMs, the environment won’t be nearly as complicated.
The best approach to setting a good baseline is to monitor current workloads and use those performance results as a reference. We'll go into incorporating metric tools a bit later. Without testing, there is no objective way to gauge network performance; guessing is simply not productive. Guessing often costs more in mistakes or overspending than it would have cost to plan out a feasible deployment.
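As a minimal sketch of that kind of baseline measurement (assuming a Linux host, since the /proc/net/dev counter file is Linux-specific), the kernel's per-interface byte counters can be sampled twice and converted into throughput:

```python
import os
import time

INTERVAL_S = 1  # sampling window; longer windows smooth out bursts

def read_counters(path="/proc/net/dev"):
    """Parse per-interface RX/TX byte counters from /proc/net/dev (Linux)."""
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            iface, data = line.split(":", 1)
            fields = data.split()
            # Field 0 is rx_bytes, field 8 is tx_bytes in this file's layout.
            counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

def mbps(old_bytes, new_bytes, interval_s):
    """Convert a byte-counter delta over an interval into megabits per second."""
    return (new_bytes - old_bytes) * 8 / interval_s / 1e6

if __name__ == "__main__" and os.path.exists("/proc/net/dev"):
    before = read_counters()
    time.sleep(INTERVAL_S)
    after = read_counters()
    for iface, (rx, tx) in after.items():
        print(f"{iface}: in {mbps(before[iface][0], rx, INTERVAL_S):.1f} Mb/s, "
              f"out {mbps(before[iface][1], tx, INTERVAL_S):.1f} Mb/s")
```

Running a sampler like this during peak hours on each host gives a rough per-interface baseline; dedicated monitoring tools build history and alerting on top of the same counters.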
Virtual vs. physical machines
From a technological perspective, there are key differences between a VM and a physical box. However, from a network traffic viewpoint, we can see a lot of similarities. Whether the workload is physical or virtual, traffic flow generated from both will have distinct patterns that IT engineers can plan around.
In traditional physical server environments, resource management issues are resolved through resource isolation: a host runs only one workload, and each workload is provided with dedicated I/O hardware resources. This ensures security as well as performance, since these physical servers are connected only to the networks they need. Physically distinct networks isolate devices from intrusion threats, denial-of-service attacks and application failures on other hosts.
This physical host model changed a bit with the introduction of server virtualization. With virtualization, IT managers create a flexible pool of resources that can be deployed as needed. Any server can ideally run any application, which means that a single server now requires sufficient connectivity for all of the applications it hosts. This helps with redundancy, failover, disaster recovery and many other aspects of business continuity that traditional physical server environments had trouble providing.
Further increasing connectivity needs, virtual servers demand dedicated networks for management, such as IPMI and management VLANs, and often require external storage connections as well: a dedicated port for iSCSI or Fibre Channel over Ethernet, or a storage area network (SAN) port for Fibre Channel. For VMs to move freely among physical hosts, shared SAN storage must be in place, which requires still more connectivity and interface planning.
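To make the connection count concrete, here is a hypothetical per-host port tally; the categories and counts below are illustrative assumptions, not sizing guidance for any particular environment:

```python
# Hypothetical per-host connection tally; the categories and counts are
# illustrative assumptions, not a recommendation for a real deployment.
PORTS_PER_HOST = {
    "vm_traffic": 2,       # redundant pair carrying guest network traffic
    "management": 1,       # IPMI / management VLAN
    "live_migration": 1,   # dedicated link for moving VMs between hosts
    "storage": 2,          # redundant iSCSI or FCoE pair to the SAN
}

def ports_per_host(needs):
    """Total switch ports one host consumes."""
    return sum(needs.values())

def switch_ports_needed(host_count, needs):
    """Total switch ports the whole pool consumes, before uplinks."""
    return host_count * ports_per_host(needs)
```

Under these assumed counts, a ten-host pool would consume 60 switch ports before counting uplinks or spares, which is why virtualization tends to drive port counts up quickly.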
Planning the environment
The most important tip that can be shared is this: Do not go out and buy the most expensive piece of networking hardware just because you think it will help.
A 10 GbE-capable HP ProCurve switch can cost several thousand dollars and may not even be needed in the environment; additional switch modules can substantially add to the cost. The first step is a core bandwidth analysis of the network, followed by planning for future growth. By understanding current needs and forecasting for the future, an IT manager can build out the network infrastructure without blowing the budget.
More importantly, with careful planning, the network capacity will work well and have room for growth. Remember, just because a switch is high-end doesn't mean it will improve performance.
One of the most overlooked elements in network planning is wiring. Examining the cabling setup can very quickly determine the type of interface switching the environment can handle. Simple iterative upgrades, such as moving from CAT 5E to CAT 6, can lead to major infrastructure improvements; in fact, industry analysts predict that 80%-90% of all new installations will be cabled with CAT 6. Because CAT 6 is backward compatible, applications that worked over CAT 5E will work over CAT 6.
Note: Ripping out all of your old cabling and putting in the latest CAT 6 wiring will not give you immediate gigabit capabilities. Unless every single component in the network environment is gigabit-ready, no network infrastructure will ever be “true gigabit.”
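One quick way to spot a non-gigabit component on a host (again assuming Linux, which exposes negotiated link speeds through sysfs) is to read the speed of every interface; the slowest link caps the path:

```python
import glob

def link_speeds():
    """Negotiated link speed (in Mb/s) for each interface, per Linux sysfs."""
    speeds = {}
    for path in glob.glob("/sys/class/net/*/speed"):
        iface = path.split("/")[4]
        try:
            with open(path) as f:
                speeds[iface] = int(f.read().strip())
        except (OSError, ValueError):
            speeds[iface] = None   # link down or virtual device; speed unknown
    return speeds

def slowest_link(speeds):
    """A path is only as fast as its slowest hop: return the lowest known speed."""
    known = [s for s in speeds.values() if s is not None and s > 0]
    return min(known) if known else None
```

An interface reporting 100 here on a host cabled into a gigabit switch points at a negotiation or cabling problem, and the same slowest-hop logic applies to every switch and cable run between endpoints.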
About the expert: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.