Today, more organizations are moving server virtualization technologies into production. It makes sense, especially during these tough economic times, because server virtualization promises vast savings in server hardware, rack space, power consumption and cooling.
But in order to move to server virtualization, many organizations have discovered that they must acquire new physical server technologies. That's because the latest hypervisors rely on x64 hardware with hardware-assisted virtualization baked into the server processor. In a way, this new hardware acquisition also makes sense because it comes with reduced power consumption and reduced cooling requirements, which supports the effort to reduce ongoing costs in data centers.
The problem is that too many organizations are content with the 10:1 consolidation ratio they typically obtain through physical server consolidation. You can increase this ratio and reduce the number of required host servers by focusing on how virtual machines (VMs) are placed on host servers. To do this while still obtaining top-level performance from hosts and virtual machines, you have to roll up your sleeves and do some hard work. After all, even as you improve physical resource utilization, you'll still want all systems to perform at their best.
So now is the time to take a close look at the virtual service offerings you will be running. These are the services that your end users interact with that are now running in VMs. The next step is to develop some resource allocation rules.
You'll also need to understand the features of your server virtualization platform in depth so you'll be able to move from conservative to aggressive VM placement and reap the benefits. Begin by categorizing your existing services and applications.
Begin with service and application categorization
The best way to change your VM placement approach is to start categorizing the server workloads you intend to run in your virtual service offerings. Try to keep the number of categories to a minimum and group services by resource requirement.
For example, Web servers require mostly network resources and so do e-mail transport servers; database servers are focused on storage requirements; middleware servers are focused on processing requirements; and so on. These are the systems you'll be running in production, but you'll also need test systems, training systems and, possibly, development systems.
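The categorization step can be sketched in a few lines of Python. The workload names and categories below are illustrative assumptions, not a prescribed taxonomy; the point is simply to tag each workload with its dominant resource so it can be grouped later.

```python
# Minimal sketch: group server workloads by their dominant resource.
# Workload names and resource labels are hypothetical examples.

WORKLOADS = {
    "web-frontend":   "network",
    "smtp-transport": "network",
    "sql-db":         "storage",
    "middleware-app": "cpu",
}

def by_category(workloads):
    """Invert the mapping: dominant resource -> list of workload names."""
    groups = {}
    for name, resource in workloads.items():
        groups.setdefault(resource, []).append(name)
    return groups

print(by_category(WORKLOADS))
```

Keeping the number of resource categories small, as suggested above, makes the later placement decisions much easier to reason about.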
In addition, network server placement comes into play. Placement refers to the architectural proximity or position of the server in an end-to-end distributed system. Three positions are possible:
- Inside the intranet
- In the security perimeter -- often referred to as the demilitarized zone (DMZ), though for large organizations, the perimeter often includes more than just a DMZ
- Outside the enterprise
Server placement tends to blur when workloads are virtualized. VMs on a physical host may be in one zone while others on the same host are in another, and everything is segregated through the virtual networking features of the host servers. Despite this, make sure you keep server placement in mind when you position the virtual servers for each zone in your network.
If you plan well, you should be able to virtualize 100% of your workloads. Today, virtualization infrastructures can run almost any workload. A July 2007 VMware Inc. survey of 361 of its customers found that most ran advanced workloads on its virtualization engine (see Figure 1).
More organizations are running Microsoft SQL Server, Microsoft Exchange Server and other complex workloads in virtual machines, with performance that meets or exceeds what they expected from physical machines.
Figure 1: Common virtual workloads among VMware customers
There should be few reasons why you cannot virtualize a service offering. For example, you may decide to continue running some service offerings on older 32-bit hardware just because you're not ready to move off of those systems.
But consider the advantages of running virtual machines. Because they are virtual, you can load them on any hardware system. In addition, they are easy to deploy and protect -- just copy the disk files to another location.
If you do find that you need to keep older hardware running in your data center, why not turn those machines into host systems? It's true that Microsoft Hyper-V and Citrix XenServer do not run on 32-bit platforms, but you can still rely on tools such as Microsoft Virtual Server, Sun xVM VirtualBox or VMware Server. All of those products run on 32-bit systems, giving older servers more life as physical hosts instead of delivering service offerings to users.
That way, all of your physical machines -- 32-bit and 64-bit -- are hosts, and all of your service offerings are virtualized. Remember, however, that hypervisors running on 32-bit systems cannot host 64-bit virtual machines. Remember, too, that operating system and application vendors are moving to x64 platforms, which means 32-bit hardware has a short lifespan ahead of it. Setting up these older resources may simply not be worth the effort.
Develop virtual resource allocation rules
Once you have categorized the various server workloads in your data center, think about how you will assign them to hosts. If a host server is going to run up to 20 VMs, those VMs should not contend for the same resources at the same time.
To optimize VM placement, you must look at the workloads and identify which processes and resources they need and when they require them. For example, if you're running Windows services in your network, you can expect them to behave as follows (see Figure 2):
- Domain controllers require network and processor resources at peak times, such as early morning or after lunch.
- File and print servers require processor and network resources at off-peak times, such as mid-morning or mid-afternoon.
- Web servers focus on network resources and, if properly constructed, will require a steady stream of resources.
- SQL Server and Exchange Server each require steady resources throughout the day and focus mostly on disk and processor resources.
- Corporate applications often have scheduled resource requirements. For example, a payroll application will run on a semimonthly or biweekly schedule.
- Test and development systems are often used during off-hours or have extremely variable workloads.
- Training systems run during the day but usually have a low ongoing demand on resources.
Not all workloads are busy at all times. In fact, some workloads are "parked" or "idling" and rarely run. These are good candidates for rationalization -- the reduction of the number of workloads you actually run.
Figure 2. Comparing server resource requirements for different workloads over time
Because server workloads require different resources at different times, configure your workloads so they do not compete for the same physical resources -- CPU, RAM, network or storage -- at the same time on the same host. Configure heterogeneous virtual workloads as much as possible (see Figure 3) and avoid placing homogeneous workloads on the same host server. This means one host server could run a domain controller, a network infrastructure server, a file server, one or more Web servers and perhaps even a corporate application. The key is to combine workloads that require different resources at different times.
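The heterogeneous placement rule can be sketched as a simple greedy algorithm: assign each VM to the first host where no current tenant peaks on the same resource in the same time window. The VM names, resource labels and peak windows below are illustrative assumptions drawn loosely from the workload list above, not measured data.

```python
# Hedged sketch of heterogeneous VM placement. Each VM is described by
# its dominant resource and its peak time window; a placement conflict
# means two VMs on one host peak on the same resource at the same time.

VM_PROFILES = {
    "dc1":      ("cpu",     "early-morning"),
    "file1":    ("network", "mid-morning"),
    "web1":     ("network", "steady"),
    "sql1":     ("storage", "steady"),
    "payroll1": ("cpu",     "scheduled"),
    "dc2":      ("cpu",     "early-morning"),
}

def place(vms, host_count, capacity):
    """Greedily assign VMs to hosts, avoiding identical resource/window
    profiles on the same host and respecting a per-host VM cap."""
    hosts = [[] for _ in range(host_count)]
    for vm, profile in vms.items():
        for tenants in hosts:
            conflict = any(vms[t] == profile for t in tenants)
            if not conflict and len(tenants) < capacity:
                tenants.append(vm)
                break
        else:
            raise RuntimeError(f"no host available for {vm}")
    return hosts

for i, tenants in enumerate(place(VM_PROFILES, host_count=2, capacity=5)):
    print(f"host{i}: {tenants}")
```

Notice that the two domain controllers end up on different hosts because they share the same resource profile and peak window, while the mixed workloads pack densely onto one host. A production placement engine would of course weigh actual utilization data rather than coarse labels.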
Figure 3. Assigning heterogeneous workloads to host servers
Work with the hypervisor network layer
Hypervisors let you provision multiple virtual networks on each host server. Three network types can be provisioned: public, private and host-only. VMs can use these networks in a variety of ways, so you should provision each network type on every host you prepare.
Each VM that is designed to provide service offerings to end users must do so through a public network interface. Therefore, these VMs must have at least one connection to the public virtual network interfaces you create on the host servers.
Public networks also allow VMs that are located on different hosts to communicate with each other. It is a good practice to create a number of public network paths on each host to provide redundancy in the services.
Host-only virtual network adapters are used to support inter-VM and VM-to-host communications. If you need to update a VM, you can do it through the host-only communication channel, avoiding additional traffic on the public interfaces. Host-only communications are performed directly from the host to the VMs and use the host's internal communication channels to operate.
Private virtual network adapters can also be used to reduce traffic on the public interface. When you need to completely isolate VM traffic, link the VMs to a private network adapter. This allows inter-VM communications but will not support communications with any other device, including the host server. You might want to use the private network to support management communications on perimeter network systems to protect them from any other type of communication.
As a best practice, you should provision each host with each network type and then link various VM components to the appropriate virtual network interface card. In addition, you can rely on these networks to segregate and isolate VMs from each other even though they run on the same hosts.
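The reach of each network type described above can be captured in a small illustrative model. The rules below simply restate the text (public networks reach end users and other hosts, host-only networks stop at the host, private networks carry inter-VM traffic only); the names are hypothetical, and real hypervisors label these types differently from product to product.

```python
# Illustrative model of the three virtual network types and the kinds
# of traffic each one permits, per the descriptions above.

REACH = {
    "public":    {"vm-to-vm", "vm-to-host", "external"},
    "host-only": {"vm-to-vm", "vm-to-host"},
    "private":   {"vm-to-vm"},
}

def can_reach(network_type, target):
    """Return True if a VM on this network type can reach the target."""
    return target in REACH[network_type]

assert can_reach("public", "external")        # service traffic to end users
assert can_reach("host-only", "vm-to-host")   # patching via the host channel
assert not can_reach("private", "vm-to-host") # fully isolated, inter-VM only
```

A check like this is useful when validating a host design: a perimeter-zone VM, for example, should fail any rule that would give its management traffic a path to the public interface.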
Change from conservative to aggressive VM placement
Most organizations are content with conservative VM placement on host servers. These organizations achieve remarkable results by simply relying on the physical server consolidation process -- moving physical servers into VMs. If they use homogeneous VM placement, then the consolidation results will be relatively poor. But when they use heterogeneous VM placement, results improve significantly and can easily support a 10:1 consolidation ratio.
But the organizations that reach upwards of 15:1 consolidation ratios are the ones that perform aggressive VM placement. They make bold moves, such as intermingling network server placement with VM placement and mixing environments on the same hosts. The results can be astounding: each host server runs many more VMs, reducing the cost of your host server infrastructure or at least increasing the return on investment for each host server. In these tough economic times, who can afford anything less?
By relying on the various network interfaces that are available on a hypervisor host, you can place VMs from different network zones on the same host and have them run in completely separate contexts. And placing VMs from different zones on the same host lets you increase the density of your VM placements.
The various network interfaces on your hosts will also let you co-position virtual machines from different environments on the same hosts. That means you can place VMs for training, testing, development and production on the same host and segregate them by attaching them to different virtual network adapters.
It's also possible to assign throttle policies to the testing, training and development VMs to ensure that they do not take resources away from the production VMs. Automate this entire process and maintain spare hosts so that production VMs that need additional resources during peaks can be moved to a spare host when necessary. The result is a host server running a much denser VM workload (see Figure 4).

Figure 4. Increasing host server density through aggressive VM placement
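The throttle-and-migrate policy just described can be sketched as follows. The cap values and VM demands are hypothetical: non-production environments are clamped to a fraction of host CPU, and when the host is still over capacity, the busiest production VMs are flagged for a move to a spare host.

```python
# Sketch of a resource policy check under assumed cap values:
# non-production VMs are throttled, and production VMs are flagged
# for migration to a spare host when the host runs over capacity.

CAPS = {"production": 1.0, "test": 0.25, "training": 0.25, "dev": 0.25}

def effective_demand(vm):
    """Clamp a VM's demanded share of host CPU by its environment cap."""
    return min(vm["demand"], CAPS[vm["env"]])

def plan_moves(host_vms, host_capacity=1.0):
    """Return production VMs to migrate while the host is over capacity."""
    load = sum(effective_demand(vm) for vm in host_vms)
    moves = []
    for vm in sorted(host_vms, key=lambda v: v["demand"], reverse=True):
        if load <= host_capacity:
            break
        if vm["env"] == "production":
            moves.append(vm["name"])
            load -= effective_demand(vm)
    return moves

vms = [
    {"name": "erp",   "env": "production", "demand": 0.7},
    {"name": "web",   "env": "production", "demand": 0.4},
    {"name": "test1", "env": "test",       "demand": 0.9},  # capped at 0.25
]
print(plan_moves(vms))  # the busy production VM moves; the test VM stays capped
```

In a real deployment this logic lives inside the hypervisor's resource-management and live-migration features; the sketch only shows why caps on non-production VMs and spare-host headroom work together.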
There is a lot to learn about server virtualization. You'll quickly discover that it is a powerful operational model and that the return on investment is extremely worthwhile. In addition, your administrative workload will eventually decrease. There is so much you can do with a VM that you just can't do with a physical server.
To perform aggressive VM placement, keep these tips in mind:
- Standardize your host servers as much as possible. Choose servers that will meet and grow with the needs of your VMs.
- Identify and categorize your virtual workloads to determine which resources are required by each workload.
- Create several virtual network adapters on host servers for both redundancy and for VM segregation.
- Configure your VMs with appropriate policies, providing all required resources for production VMs and capping resources for non-production environments.
- Assign VM placement according to heterogeneous placement rules.
- Assign VM placement using aggressive placement to maximize VM density on host servers.
- Maintain spare host servers to support peak workloads, and configure your VM movement policies to support production VMs only.
This approach will let you maximize the number of virtual machines your host servers will run and improve your physical server consolidation ratios.
When you move toward aggressive VM placement, pay close attention to licensing. VMs can require fewer licenses than physical machines, which is one reason moving to a dynamic data center is so rewarding. But VMs are also easy to generate, so keep licensing in mind whenever you create a new one.
ABOUT THE AUTHORS: Danielle Ruest and Nelson Ruest are IT experts focused on continuous service availability and infrastructure optimization. They are authors of multiple books, including Virtualization: A Beginner's Guide and Windows Server 2008, The Complete Reference (McGraw-Hill Osborne) as well as the MCITP Self-Paced Training Kit (Exam 70-238): Deploying Messaging Solutions with Microsoft® Exchange Server 2007 (Microsoft Press). Their upcoming book will be a training kit for Microsoft exam 70-652: Configuring Windows Server Virtualization with Hyper-V. Contact them at email@example.com.