
Using virtualization for cost-effective servers

If it's done right, optimizing server utilization with virtualization can save data centers money.


Everything old is new again -- at least when it comes to data center computing. Early mainframes presented a centralized computing strategy, but this eventually gave way to a distributed computing model where servers appeared in departments and branch offices.

Today, the proverbial pendulum has swung back toward a centralized model where servers are being moved to a single data center. This presents a unique challenge for IT professionals, who are often called on to support an ever-increasing number of servers and business processes with a decreasing budget. The good news is that there are several practical ways to address these problems and keep servers as cost-effective as possible.


It might seem like IT professionals would be eager to embrace any technologies that allow their companies to optimize IT operations, but the economic slowdown has made them hesitant to invest in any new changes -- even those that could improve efficiency.

"Let's say I can save $100,000 over the next two years in capital expense and some of my operating costs," said Scott Gorcester, the president of Moose Logic, an IT technology provider in Bothell, Wash. "The problem is that it's going to take a $100,000 project to do that."

Centralizing all of an organization's computing assets in one location can mean even greater challenges for existing data centers, such as space limitations, along with shortages of power and cooling. But perhaps the biggest issue for data centers is wasted processing power. Experts say that most traditional servers running basic business applications operate at only 5% to 10% utilization, leaving the bulk of computing resources unused. As new applications and services are added to data centers, the need to reduce waste and save resources is critical to improving server cost-effectiveness.

Virtualization's role in server cost-effectiveness
Server virtualization has emerged as a preeminent data center technology that can improve utilization and address cost-effectiveness. Virtualization converts each application and operating system to a unique virtual instance or virtual machine (VM) while disconnecting the instance from the underlying server hardware, drivers and other elements that traditionally tied hardware and software together.

By mounting numerous VMs on the same physical server, data centers can dramatically consolidate server use. This reduces the number of physical servers as well as corresponding facility needs. Virtualization also offers data centers greater flexibility, allowing easy VM migration between even dissimilar physical servers to maintain availability during server failures or planned downtime. The key for data center administrators is to balance consolidation with performance and failover protection.

Because virtualization potentially supports several VMs on a given server, use of the server's computing resources is significantly higher. It's not uncommon to find a virtual server using 50% or more of its computing resources. The cost benefits of virtualization are well understood: There are simply fewer servers, and each virtualized server can perform more business work per dollar spent on hardware, power, cooling, management and maintenance.
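To make that math concrete, here is a minimal back-of-the-envelope sketch in Python, using made-up utilization figures, of how many hosts a set of lightly loaded workloads might consolidate onto at a conservative utilization ceiling:

import math

# Rough consolidation estimate: how many physical hosts are needed to run
# a set of lightly loaded workloads at a target utilization ceiling.
# All figures below are hypothetical examples, not measurements.

def hosts_needed(workload_utilizations, target_ceiling=0.65):
    """Each workload utilization is its share of one host's capacity."""
    total_demand = sum(workload_utilizations)   # in "host units"
    return math.ceil(total_demand / target_ceiling)

# Twenty standalone servers, each only 5% to 10% busy on average.
workloads = [0.05, 0.10] * 10
print(hosts_needed(workloads))   # 3 hosts instead of 20, with headroom to spare

Real sizing would also weigh peak loads, disk and network I/O and failover headroom, as discussed below.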


But the combined computing demands of VMs can tax even the most powerful server. Cost-effectiveness doesn't mean excessive consolidation; rather, it means balancing workloads across multiple servers. If you've got two workloads -- one that's CPU-intensive and one that's memory-intensive -- together they may be a good fit on a single server for virtualization purposes, said Bob Plankers, a technology consultant and the blogger behind The Lone Sysadmin.
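As a minimal sketch of that pairing idea, with hypothetical per-VM demands expressed as fractions of one host's capacity, a simple first-fit placement that respects both CPU and memory caps lets a CPU-heavy VM and a memory-heavy VM share a host:

# First-fit placement that respects both CPU and memory caps, so a
# CPU-heavy VM and a memory-heavy VM can share one host.
# Demand figures are hypothetical examples.

CAP = 0.8  # leave some headroom on every host

def place(vms):
    hosts = []  # each host tracked as [cpu_used, mem_used, vm_names]
    for name, cpu, mem in vms:
        for host in hosts:
            if host[0] + cpu <= CAP and host[1] + mem <= CAP:
                host[0] += cpu
                host[1] += mem
                host[2].append(name)
                break
        else:
            hosts.append([cpu, mem, [name]])
    return hosts

vms = [("db", 0.10, 0.60), ("batch", 0.60, 0.10), ("web", 0.15, 0.15)]
for cpu, mem, names in place(vms):
    print(names, f"cpu={cpu:.2f}", f"mem={mem:.2f}")
# The memory-heavy "db" and CPU-heavy "batch" end up sharing one host.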

Disk I/O is another constraining factor as multiple workloads contend for the same disk channels. Storage is often offloaded to an iSCSI or Fibre Channel storage area network for better performance, less contention and more data protection. Current server models can usually accept CPU, memory, disk or network upgrades to accommodate additional or particularly demanding virtual workloads. Upgrades can be an economical means of extending the life of servers and holding off costly technology refreshes.

A server that's offline or crashed is extremely cost-ineffective, so keeping the virtual server running and accessible is a crucial consideration in hardware cost-effectiveness. Many corporations need their business operations running 24/7 from a single data center, especially to support global offices, customers or partners. To achieve availability in virtual servers, data centers should have redundant physical design elements such as power supplies, network cards and fans, as well as a certain amount of load balancing and failover planning.

Although it seems counterintuitive, data centers should never seek 100% utilization on virtual servers. Instead, they should leave an ample amount of processing capacity available to assume the processing burden of other machines. This strategy helps eliminate single points of failure.

Just consider what would happen if a data center had two servers, each running five VMs and using close to 100% of server capacity. If one server failed, there would be no spare processing capacity on the companion server and the affected VMs would remain offline until the server was restored.

Ideally, a data center would want about 50% headroom on each of those two servers so that all the VMs could run on the surviving server. With three servers running VMs, about one-third of each server's capacity would need to be kept free.
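In general, a cluster of N comparable servers that must ride out the loss of one host can safely run each host at roughly (N-1)/N of capacity. A minimal sketch of that arithmetic:

# Headroom needed per host so the surviving hosts can absorb the load of
# one failed host (simple N-1 model; real planning also weighs per-VM
# placement rules and peak-versus-average load).

def max_safe_utilization(hosts):
    return (hosts - 1) / hosts

def headroom_per_host(hosts):
    return 1 - max_safe_utilization(hosts)

for n in (2, 3, 5, 8):
    print(f"{n} hosts: run each at <= {max_safe_utilization(n):.0%}, "
          f"reserve {headroom_per_host(n):.0%} headroom")

As the output shows, the headroom burden per server shrinks as the number of servers grows, which is why larger clusters can run individual hosts harder.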

Similarly, data centers should plan the VM failover process. Tools like VMware's Distributed Resource Scheduler can restart failed VMs from storage on other machines with available computing resources. But it always helps to apply rules and restrictions that prevent incompatible VMs from inadvertently coexisting on certain servers. Fault-tolerance software such as Marathon Technologies' everRun VM provides even more availability by hosting redundant instances of critical VMs on clustered servers: when one server fails, the duplicated VM instance takes over without interruption.
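As a minimal sketch of such a rule check -- the VM names, host names and rule groups below are hypothetical, and tools like DRS enforce these rules natively -- a proposed placement can be validated against simple anti-affinity groups:

# Check a proposed VM-to-host placement against anti-affinity rules:
# VMs in the same rule group must not land on the same host.
# All names and groups are hypothetical examples.

ANTI_AFFINITY = [
    {"dc01", "dc02"},       # keep both domain controllers apart
    {"web-a", "web-b"},     # keep load-balanced web nodes apart
]

def violations(placement):
    """placement maps VM name -> host name."""
    problems = []
    for group in ANTI_AFFINITY:
        by_host = {}
        for vm in group:
            by_host.setdefault(placement.get(vm), []).append(vm)
        for host, vms in by_host.items():
            if host is not None and len(vms) > 1:
                problems.append((host, vms))
    return problems

print(violations({"dc01": "esx1", "dc02": "esx1", "web-a": "esx2", "web-b": "esx3"}))
# Flags the two domain controllers sharing host "esx1" before (or after) a
# failover event places them together.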

Optimizing hardware for cost-effective server utilization
Cost-effectiveness often means getting more utilization from server hardware, but this requires insight about each server's hardware, the ways that hardware is utilized by each VM and the underlying operating system hypervisor. Performance monitoring tools can track and report on the computing resources needed by each VM. Several third-party performance monitoring products are available, but the major hypervisor vendors offer suitable tools. For example, VMware Infrastructure 3 provides an SRVC tool that can report on a wide range of resources consumed by a VM (see Figure 1).

Figure 1. Hypervisor products like VMware Infrastructure 3 provide monitoring tools that can track and report on a system's resource utilization.

Once the resource demands of a VM are identified, it is also possible to tailor the server resources made available to that VM using native hypervisor tools like Hyper-V Manager (see Figure 2). For example, the number of CPU cores, the amount of memory and other server resources allocated to a VM can be adjusted. Less critical VMs can receive fewer resources, while vital ones can be configured to receive more. The result is that more VMs can be supported while forestalling costly upgrades or outright server replacement.
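A minimal sketch of that tuning step, using hypothetical monitoring figures: size each VM's memory allocation from its observed peak plus a buffer, giving critical VMs more slack. The actual change would then be applied through the hypervisor's management tool.

# Propose per-VM memory allocations from observed peaks plus a buffer.
# Peaks (in GB) and buffer percentages are hypothetical examples.

OBSERVED_PEAK_GB = {"erp-db": 14.0, "intranet": 2.5, "test-01": 1.2}
CRITICAL = {"erp-db"}

def proposed_allocation(vm, peak_gb):
    buffer = 0.50 if vm in CRITICAL else 0.20   # more slack for vital VMs
    return round(peak_gb * (1 + buffer), 1)

for vm, peak in OBSERVED_PEAK_GB.items():
    print(f"{vm}: peak {peak} GB -> allocate {proposed_allocation(vm, peak)} GB")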

Remember that a virtual server should always reserve some unused headroom to accommodate VMs failed over from other troubled servers, so upgrades and VM load balancing may be needed to ensure adequate computing capacity. Generally speaking, headroom recommendations shrink as the total number of servers increases because it is possible to spread out the failed-over VMs across a greater number of physical servers.

Figure 2. Hypervisors also provide tools to tailor the allocation of computing resources to virtual machines, helping to optimize server resources.

It's important for data center administrators to consider the implications of VM sprawl, which is the unchecked proliferation of VMs across servers. VMs are so easy to create that a new one can be started in a matter of minutes. If not monitored and controlled, VMs can tax server computing resources and create excessive VM management headaches that impair the cost-effectiveness that virtualization can offer. Tools and policies can be implemented to help mitigate the effects of VM sprawl.
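Those policies are easier to enforce with a little automation. As a minimal sketch over a hypothetical inventory, a sprawl audit can flag VMs with no recorded owner or no recent activity for review:

# Flag candidates for review in a VM inventory: no owner on record, or
# idle for a long stretch. Inventory entries are hypothetical examples.
from datetime import date, timedelta

INVENTORY = [
    {"name": "hr-app",   "owner": "hr",  "last_active": date(2010, 3, 28)},
    {"name": "demo-old", "owner": None,  "last_active": date(2009, 11, 2)},
    {"name": "test-tmp", "owner": "dev", "last_active": date(2009, 12, 15)},
]

def sprawl_candidates(inventory, today, idle_days=90):
    cutoff = today - timedelta(days=idle_days)
    for vm in inventory:
        if vm["owner"] is None or vm["last_active"] < cutoff:
            yield vm["name"]

print(list(sprawl_candidates(INVENTORY, date(2010, 4, 1))))
# ['demo-old', 'test-tmp'] -- review, archive or retire them before they pile up.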

"There is still a pervasive approach toward deploying tactical management tools to control VM sprawl to assist with maintaining the cost efficiencies that have been gained from deploying VMs," said Allen Zuk, the president and CEO of Sierra Management Consulting LLC, an independent technology consulting firm based in Parsippany, N.J.

So why not just upgrade the server's CPU, memory, disk subsystem or network adapter? A second CPU, for example, can be added to an available socket on a server's motherboard, or an existing CPU can be replaced with a faster model or one with additional cores, depending on the capabilities and limitations of the individual motherboard. Memory can be added to unused slots or replaced with larger modules. Low-end SATA disks and controllers can be replaced with high-performance SCSI disk systems or omitted in favor of network storage. Network adapters can be upgraded but are more frequently paired with a second or third NIC installed on the server.

Upgrades are easy to perform, but they're also an easy way to waste money. The trick is to determine the financial viability of each upgrade.

Upgrades to CPUs and large memory modules can be expensive but can certainly be cost-effective when the server is new. It's also important to remember that upgrade components may simply not be available for some proprietary systems, especially older proprietary systems, or they may command a premium price that reduces the cost-effectiveness of the upgrade.


Conversely, it may be more cost-effective to allocate the upgrade cost toward a new server. A new server is certainly more expensive than upgrade components, and the labor needed to install and configure a new server can be substantial. But new servers often contain far more powerful processors, chipsets and memory as well as a paid maintenance period. This can make purchasing a new server more attractive than grappling with older servers that are nearing a technology refresh.
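One simple way to frame that decision is cost per additional VM per year of remaining useful life. A minimal sketch with hypothetical prices, capacities and lifetimes:

# Compare an upgrade with a replacement on cost per additional VM per
# year of remaining life. All figures are hypothetical placeholders
# standing in for real quotes and capacity estimates.

def cost_per_vm_year(price, extra_vms, remaining_years):
    return price / (extra_vms * remaining_years)

upgrade = cost_per_vm_year(price=2_000, extra_vms=4,  remaining_years=2)
replace = cost_per_vm_year(price=8_000, extra_vms=15, remaining_years=5)

print(f"upgrade: ${upgrade:,.0f} per VM-year")   # $250
print(f"replace: ${replace:,.0f} per VM-year")   # about $107

Under these made-up numbers the pricier new server wins; with a younger server or cheaper components, the upgrade would.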

"You might end up spending a couple of thousand dollars on more RAM for it, where you could just apply that money to a much faster, more modern server," Plankers said. In many cases, the older server can continue to add value to the business by being reallocated to a test and development environment or a secondary environment.

The value of used IT equipment should not be overlooked. Companies have discovered that the poor economy has flooded the used-equipment market with slightly used top-tier servers at extremely competitive prices. Data centers can affordably purchase slightly older equipment that still provides three or four years of useful operation.

It's not enough to simply provide computing resources to run important VMs. Optimization for cost-effectiveness should also take into account server and network failures and their impact on user productivity. Data center managers should review their server configurations and network architectures and locate single points of failure.

These single points of failure may be perfectly acceptable in secondary or nonessential servers, and VMs running on those platforms can easily be restored on other available servers. But core applications may demand greater reliability elements, such as redundant power supplies, redundant network adapters or a server clustering strategy with fault-tolerant software. Data centers may host mission-critical VMs on a small number of highly available servers while relegating noncritical VMs to older or less expensive systems.

There is no doubt that the implications of cost and efficiency will weigh heavily on corporate data centers long after the current economic downturn ends. Gorcester said he sees trends continuing in much the same way they are now: using any available tactics or technologies to improve server efficiency and prolong the working life of server hardware. He added that the used-equipment market will remain a source of affordable top-tier hardware.

"Organizations may start looking harder at alternatives such as data center outsourcing and managed hosting services," Zuk said. "Let's not forget cloud computing is making significant strides in establishing effective security for the cloud, thereby providing more flexibility in the VM space."

Checklist for cost-effective server strategies

  • Take advantage of monitoring and reporting tools. Getting the most from your virtual servers requires insight into the computing resources needed by each virtual workload, so adopt software tools that can monitor and report on resource loads. Hypervisors normally provide built-in tools and controls that allow data center managers to adjust resource allocation to each workload.


  • Maintain interoperability between tools and servers. When selecting tools and servers, pay attention to interoperability. This helps to ensure that one tool can monitor and report on all of the available servers, which is more cost-effective than juggling multiple different tools from a variety of vendors.


  • Avoid VM sprawl. Virtual machines are easy to create, often leading to unnecessary proliferation across data centers. This translates into more management overhead and potential security risks that can reduce the cost-effectiveness of virtualization. Implement policies that define the conditions necessary for creating a new VM. Only a limited number of IT personnel should have authority to create and manage VMs.


  • Don't overuse a server's computing capacity. Pushing server utilization to 100% may seem cost-effective, but high utilization levels may leave inadequate computing resources to handle VM failover from other servers. This can force important VMs to remain offline and unavailable for extended periods. Leave a suitable amount of resource headroom available on each server to carry some portion of the VM load from failed servers.


  • Consider the timing in any upgrade plans. Upgrades are typically less expensive than new server purchases and can extend the working life of some servers. But it may not be cost-effective to fund upgrades on proprietary hardware or older servers that are nearing the end of their production lives. Depending on business needs and financial circumstances, it may be more cost-effective to invest funds for an upgrade in a new and more powerful server.


  • Consider the disaster recovery implications of server virtualization. Backups are often problematic for nonvirtualized infrastructures, but VM protection can be much easier with the copy and migration features of virtualization. One way to improve the cost-effectiveness of virtual servers is to routinely migrate VM copies to backup or off-site storage.


  • Allocate older servers to secondary tasks. It's a simple matter to extend the useful life of older servers by reallocating them from production to secondary tasks such as backup or test and development.


  • Eliminate single points of failure in servers and networks. Inaccessible or offline servers are worthless, so take all reasonable steps to eliminate single points of failure within the server and a LAN. This is most important for servers and network segments handling mission-critical workloads.

ABOUT THE AUTHOR: Stephen J. Bigelow, a senior technology writer in the Data Center and Virtualization Group at TechTarget Inc., has more than 15 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+ and Network+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at sbigelow@techtarget.com.

This was last published in April 2010
