Virtual machines (VMs) and their corresponding hypervisors have come a long way in development and ease of use: VM creation is now done in five or six mouse clicks. Today, it's easier than ever to spin up a new VM.
However, with such simplicity comes the uncertainty of whether a VM can handle the workload. It's easy to over-allocate computing resources to a VM. Administrators must set metrics and understand what's running on a VM for effective workload management. Simply clicking the Next button may not result in the most efficient workloads.
Setting metrics for effective VM workload management
Before creating a workload, a good virtualization engineer must acquire metrics to find out what the base hardware is capable of handling and what it will be running. There are several ways to do this. A machine’s loads will vary based on a number of circumstances, but there are some loads that administrators can consider first.
User count can forecast how a workload will run and what it can handle. The resources that a machine requires can vary greatly -- one VM may have 50 users connected to it, while another may have 5,000. With Exchange Server, for example, administrators can use sizing tools to establish a baseline of the resources a deployment needs. Under-allocating resources risks performance issues, and over-allocating wastes resources that could be used elsewhere. By knowing the current user count and how many users there will be in a year, an administrator can create the VM with the resources it needs today and allow it to grow along with the user count.
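For illustration, sizing from current and projected user counts might be sketched as below. The per-user and base figures are invented placeholders, not vendor baselines; real numbers should come from the application's own sizing guidance.

```python
# Hypothetical sketch: estimate a VM's RAM from user count and projected growth.
# The per-user and base RAM figures are illustrative assumptions only.

def size_vm(current_users, annual_growth_rate, mb_ram_per_user=4, base_ram_mb=2048):
    """Return (RAM needed now, RAM needed one year out) in MB."""
    projected_users = int(current_users * (1 + annual_growth_rate))
    ram_now = base_ram_mb + current_users * mb_ram_per_user
    ram_next_year = base_ram_mb + projected_users * mb_ram_per_user
    return ram_now, ram_next_year

# A deployment starting with 500 users, expecting 20% growth in a year:
now, later = size_vm(current_users=500, annual_growth_rate=0.20)
```

Starting the VM at the lower figure and growing it toward the projection matches the article's advice: allocate for today, plan for next year.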
Most major deployments of services, software and other products running in a virtual environment have a vendor or manufacturer associated with them. These manufacturers almost always post baseline metrics that a server must have to operate effectively, which helps take the guesswork out of creating a VM and gives a solid idea of the resources necessary for deployment.
Resource allocation and workload management and monitoring
Storage can be used up quickly, and a seasoned virtualization expert will tell you that resources are extremely limited. Even machines with plentiful computing resources can have their supplies depleted by VMs running services that only use 5% of the resources given to them.
“Over-allocation is an issue engineers are faced with quite often. Understanding what the workload needs to run will set the precedent to what resources need to be used,” said Timothy O’Brien, system consultant at MTM Technologies Inc. “A simple Web service that [displays] an internal Web page in no way requires multiple cores or a massive amount of memory. However, running a SQL server with numerous users connecting to it at a given time would require more [resources].”
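A simple way to spot the over-allocation O'Brien describes is to compare what a VM uses against what it was given. The sketch below is a made-up example, not any hypervisor's monitoring API; the 5% threshold echoes the figure mentioned above.

```python
# Illustrative sketch: flag VMs whose utilization suggests over-allocation.
# VM names, metrics and the 5% threshold are hypothetical examples.

OVERALLOC_THRESHOLD = 0.05  # flag VMs using under 5% of allocated resources

vms = {
    "web-internal": {"cpu_used": 0.04, "mem_used": 0.03},  # simple web page server
    "sql-prod":     {"cpu_used": 0.70, "mem_used": 0.65},  # busy SQL server
}

def overallocated(vm_stats, threshold=OVERALLOC_THRESHOLD):
    """True when every tracked metric sits below the threshold."""
    return all(v < threshold for v in vm_stats.values())

flagged = [name for name, stats in vms.items() if overallocated(stats)]
```

Here only the lightly loaded web server is flagged; its spare cores and memory could be reclaimed for workloads like the SQL server that actually need them.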
Planning ahead alleviates challenges with resource allocation. Understanding what will run on the VM and planning for expansion creates a workload that is capable of handling the demands placed on it. Because almost no workload ever stays the same, an engineer must be ready for a fluctuating virtual environment.
“VMs are not ‘set and done,’” O’Brien added. “When a deployment occurs, it’s very important to monitor and check what that workload is doing. Since resources and users fluctuate, loads on the VM will change as well.”
Here are some key points for understanding resource allocation:
- Nothing is ever set in stone. Modifying the size of a VM is very common, and some resources can even be allocated to a live workload in real time.
- Always monitor your VMs. Know which resources VMs are using at any given moment. Workload management that involves watching workloads over time and seeing when demand peaks allows an engineer to properly distribute resources when needed.
- Know your applications. Never assume that an application will always run the same. With service packs, additional users and changes in the overall environment, applications can require more RAM, storage, and even additional processors at any given time.
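The monitoring advice above can be sketched in a few lines: sample utilization over time, then look for the windows where demand peaks. The samples and the 80% threshold here are hypothetical, standing in for data a real monitoring tool would collect.

```python
# Sketch: find peak-demand windows in periodic utilization samples so an
# engineer knows when to redistribute resources. Sample data is made up.

from statistics import mean

samples = [0.22, 0.25, 0.31, 0.88, 0.91, 0.30, 0.27]  # CPU utilization over time

def peak_windows(samples, threshold=0.80):
    """Return the sample indices where utilization exceeded the threshold."""
    return [i for i, s in enumerate(samples) if s > threshold]

avg = mean(samples)          # overall load looks modest...
peaks = peak_windows(samples)  # ...but two windows spike near saturation
```

The point of watching workloads over time is exactly this gap: the average alone would suggest the VM is over-provisioned, while the peaks show when it genuinely needs its resources.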
Understanding and using a “stock” VM
There is nothing wrong with using an out-of-the-box virtual workload. In several instances it is even recommended. A very basic VM running simple services should only have the bare requirements allocated to it. Most stock VMs are assigned a single processor, 512 MB of RAM and default settings including a single network interface card and smaller pre-allocated amounts of storage. Many development workloads with a single application or an isolated environment that needs to be tested never require more than the bare minimum of resources to run. In these cases, having VMs up and running quickly can save time, help a test environment proceed and save precious resources for other workloads that require them.
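A "stock" profile like the one described above can be captured as a simple default template. This is an illustrative sketch only; the field names and the disk size are assumptions, not any hypervisor's actual settings.

```python
# Illustrative "stock" VM profile matching the defaults described in the text.
# Field names and the disk figure are hypothetical, not a real hypervisor API.

from dataclasses import dataclass

@dataclass
class VMProfile:
    vcpus: int = 1      # single processor
    ram_mb: int = 512   # 512 MB of RAM
    nics: int = 1       # single network interface card
    disk_gb: int = 20   # a small pre-allocated disk; the value is an assumption

stock = VMProfile()               # bare-minimum test/dev workload
dev_vm = VMProfile(disk_gb=10)    # tweak only what the isolated test needs
```

Cloning such a template gets development VMs running quickly while leaving the bulk of the host's resources for the workloads that require them.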
Best practices for resource allocation
Planning and mapping out a virtual environment helps virtualization engineers create workloads that use the right amount of resources. Since resources are often used up quickly when they don’t need to be, knowing what goes into a workload before its launch will save time and headaches down the road.
When administrators are faced with the decision of whether to expand their environments, purchase additional storage area networks or pump more money into workloads, they may benefit from re-evaluating their workloads' resource needs. When doing so, there are a few workload management best practices that should be considered:
- Any good deployment will have a build or test phase. This is where you can really size your machines and know what you can allocate comfortably.
- Research what will go into a VM. Understanding the resources an application or database requires will prevent under- or over-allocation. Resource-intensive workloads include Exchange, SQL or any database back-end application. Resource-light workloads include license servers, single-service applications or simple Web application machines.
- Always be ready for fluctuation within an environment. Being prepared and knowing what resources are available for workload distribution improves your ability to respond quickly and accurately to change, creating a more balanced and stable virtual infrastructure.
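The first best practice -- sizing from a build or test phase -- can be sketched as turning a peak measurement into a production allocation with headroom for fluctuation. The headroom percentage and rounding step below are illustrative assumptions.

```python
# Sketch: derive a production allocation from test-phase measurements plus
# headroom for fluctuation. The 25% headroom and 512 MB step are assumptions.

def allocate(peak_measured_mb, headroom=0.25, granularity_mb=512):
    """Round peak observed usage plus headroom up to the next allocation step."""
    target = peak_measured_mb * (1 + headroom)
    steps = -(-target // granularity_mb)  # ceiling division
    return int(steps * granularity_mb)

# Test phase showed the workload peaking at ~3.1 GB of RAM:
alloc = allocate(peak_measured_mb=3100)
```

Sizing from measured peaks rather than guesses avoids both under-allocation (performance problems) and over-allocation (resources stranded on an idle VM).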
This was first published in January 2011