Step-by-step virtualization adoption: Capacity planning

Not every service can be virtualized, and some services require special attention in order for virtualization to work.

After you have decided which machines to virtualize, it's time to move to the next, most critical phase of the entire project: capacity planning. That will be the focus of this article.

What is capacity planning?

What does capacity planning mean, and why is it the most delicate phase in a virtualization adoption project? In this phase, we plan how the machines to be virtualized will be distributed across physical hosts and take inventory of those hosts' resources, including processor type, memory size, mass storage size and type, redundant motherboard architecture and so on.

These physical hosts have to accommodate the planned virtual machines and have to survive severe faults. In addition, depending on the project's requirements, they have to scale up easily.

In a medium-complexity project, the chosen hardware includes more than just physical servers; it also comprises one or more storage devices, networking devices, network cards and cabling. Every piece has to be chosen carefully and not merely for performance needs. Our hardware decisions affect the next phase when the return on investment (ROI) will be calculated and we determine whether or not the project is worthwhile.

Calculating virtual machines per core

One critical value in hardware sizing is the virtual machines per core (VM/core) ratio.

Every virtualization platform has an expected average performance level, which is influenced by several factors independent of the chosen hardware -- from the optimization of the virtualization engine to the load each virtual machine is expected to handle. The number of virtual machines that a single core (or a single CPU, in the case of single-core processors) can reasonably support depends on these factors. So VMware ESX Server 3.0 and Microsoft Virtual Server 2005 R2 can have completely different VM/core ratios even on the same host.

Why is this value so vague? The number of potentially influential factors is so large that it's quite hard to state a definitive ratio for a single product. Even virtualization vendors can barely provide an indication. For example, VMware Inc. states that its ESX Server is able to handle up to eight virtual machines per core, while its VMware Server (formerly GSX Server) can handle up to four. But the numbers can be much higher or much lower depending on factors like hosted application technology (a legacy accounting application written in COBOL is not exactly a model of efficiency) or I/O loads. Even though the value is so uncertain, it's still the most critical figure in a virtualization project, and it's mandatory for a product comparison. Sometimes, however, even that is impossible; at the time of this writing, Microsoft has still not provided a suggested VM/core ratio for its Virtual Server 2005 R2.
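To make the arithmetic concrete, here is a minimal back-of-envelope sketch. The VM count, ratio and host core count below are purely illustrative assumptions, not vendor figures:

```python
import math

# Illustrative assumptions, not vendor guidance.
vm_count = 60            # virtual machines to consolidate
vm_per_core = 8          # optimistic ratio, e.g. the stated ceiling for ESX Server
cores_per_host = 4       # a dual-socket, dual-core host

cores_needed = math.ceil(vm_count / vm_per_core)          # 8 cores
hosts_needed = math.ceil(cores_needed / cores_per_host)   # 2 hosts
hosts_with_spare = hosts_needed + 1                       # N+1, so the farm survives a host fault

print(cores_needed, hosts_needed, hosts_with_spare)
```

Halving the ratio to four virtual machines per core (closer to what a heavy I/O profile might allow) roughly doubles the core count, which is exactly why the VM/core assumption dominates the hardware budget.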

Going beyond the mere VM/core calculation, it's essential to remember that we are not virtualizing a physical server but rather one or more server roles. Therefore, sizing a virtual machine the same way as the original physical server is not the best approach.

How to consolidate virtual machines

Given more than one host machine, a typical erroneous approach is to consolidate virtual machines following the same logic as their physical locations: all production machines are virtualized on the first host, all development machines on the second one, and so on. This error mainly depends on two factors: a natural desire to maintain what is considered a logical order, and a typical cultural bias that ties a physical location strictly to the services it contains (a way of thinking we will progressively abandon as we evolve toward grid computing).

This approach usually leads to poor consolidation ratios: architects trying to cram several production virtual machines into the same host will find that host overloaded by the combined weight of those production workloads, while another host serving under-utilized virtual machines wastes most of its computing time.

The big challenge of capacity planning is finding complementary services to balance virtual machine allocation. This operation has to consider several service factors, including the expected workload at every hour of the day, the kind of physical resource each service demands most, its tendency toward highly dynamic fluctuations, and so on.
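As a rough illustration of what "complementary" means in practice, the sketch below pairs a daytime web front end with a nightly batch job so their peaks never coincide. The hourly profiles, host names and capacity figure are invented for the example; a real plan would use measured data:

```python
HOURS = 24
HOST_CAPACITY = 100  # percent of one host's CPU (assumed)

# Hourly CPU demand (%) per virtual machine; the profiles are invented.
vms = {
    "web-frontend":  [10] * 8 + [60] * 10 + [10] * 6,   # busy 08:00-18:00
    "nightly-batch": [70] * 6 + [5] * 12 + [70] * 6,    # busy overnight
    "mail-server":   [20] * 24,                         # flat load
}

hosts = {"host-a": [0] * HOURS, "host-b": [0] * HOURS}

def peak_after_adding(host_profile, vm_profile):
    # Highest combined load in any hour if this VM lands on this host.
    return max(h + v for h, v in zip(host_profile, vm_profile))

# Place the heaviest virtual machines first, each on the host whose peak grows least.
for name, profile in sorted(vms.items(), key=lambda kv: -max(kv[1])):
    best = min(hosts, key=lambda h: peak_after_adding(hosts[h], profile))
    if peak_after_adding(hosts[best], profile) > HOST_CAPACITY:
        print(f"{name}: no host can take it without overload")
        continue
    hosts[best] = [h + v for h, v in zip(hosts[best], profile)]
    print(f"{name} -> {best}")

for host, prof in hosts.items():
    print(f"{host} peak load: {max(prof)}%")
```

The greedy rule (place each virtual machine on the host whose combined peak grows least) is only one possible heuristic; the dedicated tools discussed later use far richer models.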

Obviously, those factors can change over time -- scaling up or changing character entirely -- so a capacity planner must also try to forecast workload growth. In day-to-day operations, virtual infrastructure administrators will have to rearrange virtual machines as the environment changes.
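Even a trivial projection is often enough to see when a host will run out of headroom. Here the starting load, growth rate and comfort threshold are assumptions chosen purely for illustration:

```python
# Illustrative figures only: today's peak utilization and an assumed growth rate.
peak_load = 65.0          # percent of one host's CPU at peak
yearly_growth = 0.20      # assumed 20% compound growth per year

for year in range(1, 4):
    peak_load *= 1 + yearly_growth
    print(f"year {year}: projected peak {peak_load:.0f}%")

# Once the projection crosses the comfort threshold (say, 80%),
# plan to rebalance the virtual machines or add a host before that date.
```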

As if this weren't complex enough, the most important value is still missing from the equation: the acceptable performance level of every service. This is usually the most overlooked aspect of capacity planning; planners tend to assume that virtualized applications will always perform at their best. In reality, even in the best arrangement, every software application needs a certain amount of physical resources to perform acceptably.

Capacity planning has to consider complementary workload scenarios and must contemplate alternative arrangements to guarantee the expected performance of every service.
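One simple way to keep that constraint visible is to give every service an explicit resource floor and verify it against the host before accepting a placement. The figures below (a four-core, 2.6 GHz host and per-service minimums in MHz) are hypothetical:

```python
# Hypothetical host and per-service minimums, for illustration only.
host_cores = 4
mhz_per_core = 2600
host_capacity_mhz = host_cores * mhz_per_core            # 10,400 MHz

# Minimum CPU each service needs to stay within its acceptable response time.
minimum_mhz = {
    "erp-db":       4000,
    "web-frontend": 2000,
    "legacy-app":   3000,
    "file-server":  1500,
}

reserved = sum(minimum_mhz.values())                     # 10,500 MHz
if reserved > host_capacity_mhz:
    print(f"Overcommitted by {reserved - host_capacity_mhz} MHz: rearrange or add hardware")
else:
    print(f"{host_capacity_mhz - reserved} MHz of headroom left for peaks")
```

In this invented case the virtual machines would "fit" by simple VM/core counting, yet the sum of their floors already exceeds the host, which is precisely the trap described above.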

Tools that simplify capacity planning

The task appears overwhelming, but luckily part of it will be simplified in the near future, when virtual infrastructures are able to seamlessly and dynamically move virtual machines across hosts depending on their workloads. VMware just launched this feature, called Distributed Resource Scheduler (DRS), in its new Virtual Infrastructure 3 (also known as ESX Server 3.0 and VirtualCenter 2.0). Microsoft expects to offer the same capability with its upcoming Virtual Machine Manager tool.

These factors can be partially managed today with the help of a few products.

The first, and possibly most complete one, is from the current market leader, VMware. Its Capacity Planner tool is a consulting service, available at a fixed price of $22,000 for up to 200 servers. The biggest benefit of this tool is the huge database where it stores average performance values of industry applications. Based on those values, VMware Capacity Planner is not only able to suggest the best placement possible, but it's also able to recognize troublesome applications, both at physical and virtual levels.

VMware is not the only vendor offering this kind of tool; Hewlett-Packard Development Co., with its HP MS Virtual Solution Server Sizer, and Sun Microsystems Inc., with its Consolidation Tool, offer their customers a notable aid. In both cases, the products are free but are tuned and locked for sizing specific servers.

Once again, PlateSpin PowerRecon, already mentioned in the first article in this series, seems to be the most cost-effective solution for workload placement. Thanks to its new Consolidation Planning Module, it's able to offer the same capabilities as VMware Capacity Planner, minus the industry-average database. Its biggest strength is its integration with the company's physical-to-virtual (P2V) product, which we'll see in the fourth article of this series, offering a logical and integrated flow of actions during the initial steps of the project.

In the next article, we'll discuss how the critical work of capacity planning translates into an economic value, which tells you whether or not the entire project is economically worthwhile.


This was first published in July 2006
