Although there is no denying the benefits of server virtualization, it's possible for this technology to become too much of a good thing. IT managers have learned that the best way to make the most of a server hardware budget is to seek the highest possible density -- packing as many virtual machines as possible onto each physical server. The problem is that this principle is often taken to the extreme, leading to over-consolidation that can actually threaten server performance and stability.
Looking for server over-consolidation
So how can you tell if your virtual machines have been over-consolidated? One way to find out is through performance monitoring. Microsoft provides a number of metrics that Windows Server administrators can use to determine whether servers have been allocated sufficient resources. For example, Windows administrators might watch the Memory\Available Bytes counter to ensure that the server is not running short on memory. Although the Performance Monitor does not exist in Linux, utilities such as vmstat, free and iostat let Linux admins retrieve similar information. Of course, performance monitoring can be tricky, and the results (especially for CPU consumption) can be skewed by the underlying hypervisor. Fortunately, there is a much easier way to tell whether you have over-consolidated your virtual machines.
It may be a bit subjective, but your servers should perform just as well in a virtual environment as you would expect them to on dedicated physical hardware. If your virtual servers do not meet this expectation, then you need to allocate more hardware resources to them through the hypervisor. If there are no more resources to allocate, then you can rest assured that over-consolidation has occurred, and you will need to take corrective action, such as migrating VMs to other physical servers.
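As a rough illustration of the Linux side of that monitoring, the sketch below parses the /proc/meminfo text for the MemAvailable field, which is the closest Linux analog to the Windows Memory\Available Bytes counter. The sample values are invented for the demo; on a real host you would read the live file instead.

```python
# Minimal sketch (assumptions: a Linux host; sample figures are illustrative):
# pull the MemAvailable field out of /proc/meminfo, roughly analogous to the
# Windows Memory\Available Bytes performance counter.

def available_memory_kb(meminfo_text):
    """Return the MemAvailable value (in kB) from /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            # Line format: "MemAvailable:    4096000 kB"
            return int(line.split()[1])
    raise ValueError("MemAvailable not found")

# Sample /proc/meminfo excerpt (values invented for illustration):
sample = """MemTotal:       16384000 kB
MemFree:         1024000 kB
MemAvailable:    4096000 kB"""

print(available_memory_kb(sample))  # 4096000

# On a live Linux system you could read the real file instead:
# with open("/proc/meminfo") as f:
#     print(available_memory_kb(f.read()))
```

Tools like free and vmstat report these same kernel counters in a friendlier format; the point is only that the raw data is a text file away.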
Resource allocation can stop server over-consolidation
Thankfully, there are several things that you can do to prevent over-consolidation in the first place. An important first step is to accept the radical notion that not all virtual machines are created equal, and there is no simple formula that can quickly determine the appropriate virtual machine density on any one physical server.
For example, I once received an e-mail in response to an article that I had written about resource planning. The sender told me that the official policy in his organization was to purchase servers with eight CPU cores and 16 GB of RAM, and that each of those servers would accommodate seven virtual machines. Although a server like this could probably run seven virtual machines, there is no guarantee. As I said, not all virtual servers are created equal. A SQL Server virtual machine tends to consume far more resources than a DHCP server, so it would need a greater share of the host server's overall resources.
Let's suppose that the sender allocated server resources evenly. On a 16 GB server, IT would probably allocate 2 GB of memory to each of the seven virtual machines, saving 2 GB for the host operating system. Some virtual machines will do fine with 2 GB of memory. For instance, I would not expect any problems from a DNS server or a domain controller that had been allocated 2 GB of memory, but Microsoft won't even support Exchange Server 2010 unless you allocate at least 4 GB of memory to it (8 GB if multiple server roles are installed).
My point is that virtual machines are not created equal, and you cannot treat them as if they were. Otherwise, some servers will be allocated more resources than they really need while other virtual servers are starved for resources. Even if a host server has not truly been over-consolidated -- physical computing resources remain available on the server -- a VM will behave as if it had been when resources are not allocated appropriately.
Ultimately, one of the keys to avoiding server over-consolidation is to be aware of each virtual machine’s hardware requirements, and to allocate hardware according to the needs of each VM. That is, you might provide your virtualized DNS servers with a little less memory so that you can give your Exchange or SQL servers the memory they really need.
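This idea can be sketched as a simple weighted split of the host's memory. Everything below -- the VM names, the relative weights, the 2 GB host reserve -- is an illustrative assumption, not a formula from any vendor:

```python
# Hypothetical sketch: divide a host's memory by per-VM weight instead of
# evenly. Weights express relative demand (Exchange and SQL heavier than
# DNS or a domain controller); all figures are invented for illustration.

def allocate_memory(host_gb, reserved_gb, vm_weights):
    """Split (host_gb - reserved_gb) among VMs in proportion to their weights."""
    pool = host_gb - reserved_gb              # memory left after the host OS reserve
    total = sum(vm_weights.values())
    return {vm: round(pool * w / total, 1) for vm, w in vm_weights.items()}

# A 16 GB host reserving 2 GB for the parent OS, carrying seven VMs:
weights = {"exchange": 4, "sql": 4, "dns": 1, "dc": 1, "dhcp": 1, "file1": 2, "file2": 1}
print(allocate_memory(16, 2, weights))
# exchange and sql each get 4 GB; dns, dc, dhcp and file2 get 1 GB; file1 gets 2 GB
```

Contrast this with the flat 2 GB-per-VM split: the same 14 GB pool now covers Exchange's 4 GB support minimum by trimming the servers that never needed 2 GB in the first place.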
VM distribution can stop server over-consolidation
Another key to avoiding server over-consolidation is to distribute virtual machines across host servers in a balanced manner. It’s not about balancing the number of virtual machines on each host, but rather balancing the demand for hardware resources. Earlier, I mentioned that SQL Server tends to be a high-demand application. That being the case, does it really make sense to put all of your virtualized SQL servers on one host server? It would almost always be better to put one or two SQL servers on each host, and then use any remaining host server resources to run low-impact virtual machines. That way, you don’t end up killing the server’s performance through over-consolidation.
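One way to sketch this kind of demand-based placement is a greedy heuristic: sort the VMs by demand and always assign the next most demanding VM to the least-loaded host. The demand scores below are invented for illustration, not measurements:

```python
# Hypothetical sketch: spread VMs across hosts by total demand, not by count.
# Greedy heuristic -- place the most demanding VM on the least-loaded host first.

def balance(vms, host_count):
    """vms: dict of name -> relative demand score. Returns per-host placements."""
    hosts = [{"load": 0, "vms": []} for _ in range(host_count)]
    for name, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        target = min(hosts, key=lambda h: h["load"])  # least-loaded host so far
        target["vms"].append(name)
        target["load"] += demand
    return hosts

# Illustrative demand scores: SQL servers are heavy, DNS/DHCP are light.
vms = {"sql1": 8, "sql2": 8, "sql3": 7, "dns": 1, "dhcp": 1, "dc": 2}
for h in balance(vms, 3):
    print(h["load"], h["vms"])
```

The heuristic naturally lands one SQL server per host and backfills each host with the low-impact VMs, rather than stacking all three SQL servers on one machine.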
Even though the techniques I've discussed are proven to work, they aren't always practical. Suppose an organization has three host servers that are each hosting an excessive number of virtual servers. Let's also assume that offloading some of the virtual servers to another host isn't an option because of budget constraints -- you just can't afford to buy more physical servers at the moment.
In a situation like this, I would try to make each virtual machine run more efficiently. Disabling any unnecessary system services is a good start because it reduces resource consumption while also reducing the server’s attack surface. You should also look for any applications that can be removed from a server. Each server should have exactly the code that it needs to function -- nothing more, nothing less.
I would also recommend running the Performance Monitor against each virtual machine to determine how much of the server’s hardware resources are actually being used. You may find that you can improve performance by re-allocating a few resources, as you saw above. If there isn’t much wiggle room to re-allocate server resources, consider upgrading the physical server (if possible). Installing additional memory, faster CPUs, additional CPU cores and faster storage arrays are all ways to make your virtual machines perform better. Usually, adding memory or CPU cores will give you the most bang for your buck.
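As a rough sketch of that analysis, the snippet below compares each VM's allocated memory against its observed peak usage (all figures invented) and flags memory that could be reclaimed while still leaving headroom above the peak:

```python
# Hypothetical sketch: find over-provisioned VMs by comparing allocation with
# measured peak usage. The 25% headroom margin and all figures are assumptions.

def reclaimable(vms, headroom=1.25):
    """vms: name -> (allocated_gb, peak_used_gb).
    Returns GB that could be reclaimed per VM, keeping 25% above observed peak."""
    out = {}
    for name, (alloc, peak) in vms.items():
        spare = alloc - peak * headroom   # memory beyond peak-plus-headroom
        if spare > 0:
            out[name] = round(spare, 2)
    return out

# Invented monitoring results for three VMs on a flat 2 GB-each allocation:
observed = {"dns": (2.0, 0.6), "dc": (2.0, 0.9), "sql": (2.0, 1.9)}
print(reclaimable(observed))
```

Here the DNS server and domain controller surrender over 2 GB between them, which could be re-allocated to the SQL VM, which is already pressing against its limit.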
Combining redundant VMs can ease server over-consolidation
Sometimes you may find that the server hardware can’t be upgraded, and you can’t reconfigure your virtual machines to optimize the way that they perform. In these situations, the best course of action might be to combine a couple of virtual machines into one.
Suppose, for instance, that you have a virtualized DNS server and a virtualized domain controller. Neither of these servers is high-demand, but both consume some resources. If you combine them into one, you can eliminate significant overhead. Think about it for a moment -- combining the two virtual machines eliminates one of the operating systems that your server has to run. It also eliminates any support applications that were running on that operating system, such as antivirus software or a backup agent. Combining two low-demand virtual machines into one will usually reduce the demand on the host server's CPU and should also free up some memory that can be re-allocated to more demanding VMs.
The push to drive up server utilization has spawned a host of new problems as more VMs vie for limited computing resources on a physical server. Fortunately, examining the way that computing resources are assigned, considering the way that VMs are spread across servers, and combining redundant or complementary VMs, can ease much of the pain for busy IT administrators.