Tracking the evolution of data center virtualization

Business initiatives are spurring the development of data center virtualization and consolidation, thanks to the added promise and efficiency of cloud technologies.

Data center virtualization is firmly entrenched in diverse enterprise server rooms, and it's easy to see why: it vastly improves hardware utilization, reducing the capital cost of new equipment while also lowering power and cooling needs.

But data center virtualization and consolidation still have a long way to go. What do data center owners need to know to make the most of this valuable technology? Stephen Bigelow, senior technology editor, talks with Bill Kleyman, virtualization architect with MTM Technologies, about the trends and issues in server consolidation for 2012.

Stephen Bigelow: What’s on tap for data center virtualization in 2012?

Bill Kleyman:
One of the most important business-related actions on the minds of corporate executives will be growth. The mind-set is, “We want to grow. How can our IT infrastructure help support this initiative?”

Server virtualization has taken the standard data center and created an agile environment capable of rapid growth and expansion. The true revolution in 2012 will be the marriage of data center virtualization with now widely adopted cloud technologies. Administrators can create expansive distributed environments that provision and de-provision resources as needed. By using consolidated hardware, such as Cisco Unified Computing System or HP blades, environments are able to diversify their data offerings. Even smaller organizations are capable of rapid growth, largely because of the greater capabilities stemming from advanced data center and WAN technologies. Organizations looking to consolidate and expand can take their environment, or part of it, and place it in the cloud. From there, they can replicate entire workloads over the WAN to a safe remote site and continue their operations. Whether it's disaster recovery (DR), testing and development or simply remote data center hosting, the ability to deliver fast, scalable solutions through a distributed yet far more consolidated environment is becoming a reality for many organizations.

Data center consolidation has evolved along with the technology available today. Cloud environments allow administrators to remove older workloads and use agile remote servers to deliver data efficiently to any endpoint. We are able to do much more with our data, yet it is all hosted on fewer, more powerful hardware components. And we're not just talking about servers. Data center consolidation revolves around data- and resource-aware technologies capable of managing large amounts of information efficiently. This means SAN environments have more powerful data deduplication capabilities, and virtual infrastructures are able to manage widely distributed resources far more effectively. This is where the consolidation revolution occurs: in the unification of major data center components and their harmonious interaction with the cloud.
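To make the deduplication idea concrete, the following Python sketch hashes fixed-size blocks and stores each unique block only once. The block size, the sample "VM images" and the SHA-256 choice are arbitrary assumptions for illustration; real SAN dedupe engines use far more sophisticated chunking and indexing.

import hashlib, os

BLOCK_SIZE = 4096  # fixed block size; real SAN dedupe often uses variable-length chunking

def dedupe(volume: bytes, store: dict) -> list:
    """Split a volume into fixed-size blocks, keep one copy of each unique
    block in 'store', and return the list of digests that rebuilds the volume."""
    layout = []
    for offset in range(0, len(volume), BLOCK_SIZE):
        block = volume[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy is actually stored
        layout.append(digest)
    return layout

# Two fake VM images that share the same 64 MB "operating system" region,
# the way many guests on the same SAN do, plus a little unique data each.
shared_os = os.urandom(64 * 1024 * 1024)
vol_a = shared_os + os.urandom(1024 * 1024)
vol_b = shared_os + os.urandom(1024 * 1024)

store: dict[str, bytes] = {}
layout_a = dedupe(vol_a, store)
layout_b = dedupe(vol_b, store)

raw = len(vol_a) + len(vol_b)
kept = sum(len(b) for b in store.values())
print(f"raw: {raw / 2**20:.0f} MB, stored after dedupe: {kept / 2**20:.0f} MB")

Running the sketch shows roughly 130 MB of raw data collapsing to little more than one image's worth of stored blocks, which is the basic principle behind the SAN-side savings described above.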

Bigelow: What are the differences between unified communication and unified computing? When would a data center adopt either of these technologies?

Kleyman: It's important to know the differences between these technologies, especially since they play such an important role in the modern data center. "Unified" refers to a collection of technologies designed to work together to deliver a more efficient solution to the end user.

To clarify the difference, we need to understand the distinct definitions of both technologies.

Unified communications (UC) refers to both real-time and non-real-time delivery of communications based on the preferred method and location of the recipient. Unlike unified messaging systems, which collect messages from several sources (such as email, voicemail and faxes) but hold them only for later retrieval, unified communications allows an individual to check and retrieve email or voicemail from any communication device at any time.

A unified computing system (UCS) is a data center architecture that integrates computing, networking and storage resources to increase efficiency and enable centralized management. When UCS is delivered as a product, hardware and software are designed or configured to work together effectively.

A good example of this type of design is the Cisco Unified Computing System (UCS). This is an x86 architecture data center server platform composed of computing hardware, virtualization support, switching fabric and management software. The idea behind the system is to reduce total cost of ownership and improve scalability by integrating the different components into a cohesive platform that can be managed as a single unit. Just-In-Time deployment of resources and 1:N redundancy are also possible with a system of this type.

Any organization looking to consolidate its environment and reduce its hardware, networking and resource-heavy footprint should look at unified computing. It can help establish a more robust data center environment capable of growth, make management easier and make the entire infrastructure more agile.

Bigelow: The promise of unified computing is compelling, but how can adopters avoid vendor lock-in and sourcing problems (what if the vendor goes under)?

Kleyman: This has always been an issue when it comes to unified computing. The truth is that once you purchase a Cisco chassis, you have to stick with Cisco blades. Similarly, if an organization goes with an HP chassis, it will have to stick with HP blades.

However, an intelligent infrastructure is a flexible one. Even with unified computing, it's important to look at device and software interoperability. An environment can go with one type of unified computing system but still have a different vendor at its core networking infrastructure. In these cases, it's very important to know how your environment will work with the different devices in the data center. With careful planning, it's possible to have a robust, diverse environment where a few vendors work together to deliver a powerful solution.

The most important element when designing a unified computing infrastructure is the planning piece. It’ll be in the company’s best interests to choose vendors with good reputations, especially if a large portion of the environment will be dedicated to core production systems.

Bigelow: Cloud is still coming. How should organizations choose between public, private, and hybrid cloud options in 2012?

Kleyman: The answer lies in the project or operational initiative. There are core reasons for going with one type of cloud or another.

A small organization looking to expand rapidly but not invest heavily in data center infrastructure will want to look at public cloud options. Public clouds are owned and operated by third-party service providers. Customers benefit from economies of scale because infrastructure costs are spread across all users, allowing each client to operate on a low-cost, "pay-as-you-go" model. Another advantage of public cloud infrastructures is that they are typically larger in scale than an in-house enterprise cloud, which gives clients seamless, on-demand scalability. Public clouds are also great for testing and development, where an organization does not want to dedicate existing hardware to potentially temporary systems.
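To put the pay-as-you-go point in rough numbers, here is a small, hypothetical Python comparison between buying a server outright and renting equivalent capacity by the hour. Every price is a made-up placeholder, so the crossover point will differ with real quotes.

# Rough break-even sketch comparing an in-house server purchase with a
# pay-as-you-go public cloud instance. All prices are invented placeholders;
# substitute real quotes before drawing any conclusions.

server_capex = 8000.0        # hypothetical purchase price of one server (USD)
server_monthly_opex = 150.0  # hypothetical power, cooling and support per month
cloud_hourly_rate = 0.40     # hypothetical on-demand instance price per hour
hours_per_month = 730

for months in (6, 12, 36, 60):
    in_house = server_capex + server_monthly_opex * months
    cloud = cloud_hourly_rate * hours_per_month * months
    cheaper = "cloud" if cloud < in_house else "in-house"
    print(f"{months:>2} months: in-house ${in_house:>8,.0f} vs cloud ${cloud:>8,.0f} -> {cheaper}")

With these placeholder figures the public cloud wins early on and the purchased server only catches up after several years, which is exactly why short-lived or fast-growing workloads gravitate toward the pay-as-you-go model.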

Organizations looking to control their cloud presence may want to look at private cloud architecture. The private cloud, or internal cloud, describes cloud computing without third parties. The private cloud usually sits behind an organization's firewall and is available to authorized users both inside and outside of that firewall. A great example is private cloud-based application virtualization or desktop delivery: Citrix XenApp delivers applications to users located anywhere, working on any device, with access to the workload at any time, and this private cloud is controlled within a corporate data center. In this scenario, organizations would, for the most part, manage their own hardware rather than leveraging outside providers.

Finally, the hybrid option is a cloud platform being adopted by numerous organizations. The biggest benefit is that systems can be placed in their most efficient state. Workloads that must stay local and internal can be deployed via a private cloud, while development or disaster recovery (DR) environments can still be hosted by a third-party provider. Hybrid clouds combine the advantages of both the public and private cloud models. In a hybrid cloud, a company can still leverage third-party cloud providers in either a full or partial manner, which increases the flexibility of computing. The hybrid cloud environment is also capable of providing on-demand, externally provisioned scalability. Augmenting a traditional private cloud with the resources of a public cloud can help manage unexpected surges in workload.
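As a sketch of the "unexpected surge" scenario, the following Python loop keeps steady-state load on the private cloud and only asks for public capacity when demand spills over. The capacity figures and the demand curve are invented for illustration, and a real implementation would call a provider's provisioning API instead of printing a decision.

# Minimal cloud-bursting sketch: keep steady-state work on the private cloud
# and spill overflow to a public provider when demand exceeds local capacity.

PRIVATE_CAPACITY = 100   # workload units the private cloud can absorb (hypothetical)
PUBLIC_UNIT_SIZE = 10    # workload units handled by one public instance (hypothetical)

def public_instances_needed(demand: int) -> int:
    """Return how many public cloud instances to run for the current demand."""
    overflow = max(0, demand - PRIVATE_CAPACITY)
    # round up: a partial instance still has to be provisioned in full
    return -(-overflow // PUBLIC_UNIT_SIZE)

# Simulated demand over a day: a midday surge pushes past private capacity.
for hour, demand in enumerate([60, 80, 95, 130, 160, 120, 90]):
    burst = public_instances_needed(demand)
    placement = "private only" if burst == 0 else f"private + {burst} public instance(s)"
    print(f"hour {hour}: demand {demand:>3} -> {placement}")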

Bigelow: How are hypervisors evolving, and what impact will this have on server consolidation in 2012? For example, what’s coming in Hyper-V 3.0 with Windows 8?

Kleyman: The evolution of the hypervisor takes us into the conversation of "unified everything." Hypervisors are being further developed to integrate with DR systems and multiple vendors, and to offer more efficient storage capabilities. VMware, Citrix and Microsoft have all come a long way with their technologies. The ability to granularly control entire workloads across the wide area network (WAN) becomes a necessity as more data centers enter a virtual state. The idea behind a data center virtualization hypervisor is to make virtual machine (VM) guests run as optimally as possible. This means making the hypervisor work better with all of the solutions that plug into it. We are seeing better integration with networking, storage, underlying hardware components, backup systems and much more. Now we are also seeing an emergence of cloud-to-hypervisor integration. Site recovery options built directly into the hypervisor allow administrators to replicate environments over the SAN and WAN to create a truly redundant virtual infrastructure.
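A simplified way to picture hypervisor-level site recovery is change-block replication: only the disk blocks that changed since the last cycle cross the WAN. The Python sketch below fakes that idea with two in-memory snapshots; real products track dirty blocks inside the hypervisor rather than diffing full copies, and the block size here is an arbitrary assumption.

BLOCK = 4096  # replication granularity; an arbitrary choice for this sketch

def changed_blocks(previous: bytes, current: bytes) -> dict[int, bytes]:
    """Return only the blocks that differ between two snapshots, keyed by index."""
    changes = {}
    for i in range(0, len(current), BLOCK):
        if previous[i:i + BLOCK] != current[i:i + BLOCK]:
            changes[i // BLOCK] = current[i:i + BLOCK]
    return changes

def apply_changes(replica: bytearray, changes: dict[int, bytes]) -> None:
    """Write the shipped blocks into the remote copy to bring it up to date."""
    for index, data in changes.items():
        replica[index * BLOCK:index * BLOCK + len(data)] = data

# One guest write between replication cycles dirties a single block.
primary_before = b"A" * BLOCK * 4
primary_after = bytearray(primary_before)
primary_after[BLOCK:2 * BLOCK] = b"B" * BLOCK

remote = bytearray(primary_before)   # the remote site still holds the old copy
delta = changed_blocks(primary_before, bytes(primary_after))
apply_changes(remote, delta)
print(f"blocks shipped over the WAN: {len(delta)} of 4; in sync: {remote == primary_after}")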

Expect all vendors to really step up their hypervisor development in 2012. Microsoft's Hyper-V 3.0 looks to include many true enterprise features that organizations are going to enjoy working with.

Some new capabilities will include:

  • Simultaneous live migration of both a VM and its disk to a new location
  • Live migration of VMs without shared storage
  • Network interface card (NIC) teaming without special third-party hardware, something VMware has already been doing
  • Drag-and-drop file transfer from one virtual machine to another without passing through the host or your workstation
  • The ability to host virtual disks on file servers (CIFS, SMB, NFS)

With Windows Server 8, we can expect Hyper-V to make the architecture more scalable than ever before.

In beta testing, we have already seen some impressive scalability numbers:

  • Up to 160 CPU cores per Hyper-V host
  • 2 TB of RAM per host
  • 4,000 virtual machines per cluster
  • 63 nodes per cluster

Virtual machines will support up to 32 virtual CPUs and 512 GB of RAM. There also will be a new virtual hard disk (VHD) format, VHDX. According to Microsoft, VHDX should be faster and can exceed the 2 TB size limit of a VHD file.
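Those limits translate into consolidation headroom that is easy to estimate. The back-of-the-envelope Python below uses the figures quoted above plus an invented "typical" VM size and a hypothetical 4:1 vCPU overcommit ratio; actual density depends entirely on workload behavior, so treat it purely as a planning sketch.

# Back-of-the-envelope consolidation math using the Hyper-V 3.0 beta limits
# quoted above. The VM size and overcommit ratio are assumptions.

host_cores, host_ram_gb = 160, 2048          # per-host limits
cluster_vm_limit, cluster_nodes = 4000, 63   # per-cluster limits

vm_vcpus, vm_ram_gb = 4, 16                  # hypothetical "typical" VM size
vcpu_overcommit = 4                          # hypothetical 4:1 vCPU-to-core ratio

vms_by_cpu = host_cores * vcpu_overcommit // vm_vcpus
vms_by_ram = host_ram_gb // vm_ram_gb        # no memory overcommit assumed
vms_per_host = min(vms_by_cpu, vms_by_ram)

vms_per_cluster = min(vms_per_host * cluster_nodes, cluster_vm_limit)

print(f"per host:    min(cpu={vms_by_cpu}, ram={vms_by_ram}) = {vms_per_host} VMs")
print(f"per cluster: min({vms_per_host} x {cluster_nodes} nodes, "
      f"{cluster_vm_limit} VM cap) = {vms_per_cluster} VMs")

With these assumptions, memory becomes the binding limit per host and the 4,000-VM cap becomes the binding limit per cluster, which is why sizing always has to consider both host and cluster ceilings.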

Windows Server 8 looks to bring consolidation to the OS by creating domain controllers that are virtualization-aware and by embedding data deduplication (dedupe) support. For example, Server 8 will be able to dedupe files within a VHD file. So if you have several VHD files, each with a Windows Server 8 installation, the OS can be configured to dedupe identical files to save storage. The same applies when you copy a number of files between two Windows Server 8 hosts: the network stack will deduplicate the data as well.

In 2012, the industry will see expansive development of the concept of an intelligent infrastructure, where environments deploy unified architectures with a cloud presence. This will enable agile growth and allow IT data centers to evolve with the diverse needs of the business environment.

Bigelow: That's all the time we have for today. On behalf of Bill Kleyman, thank you so much for listening, and you can learn more about server virtualization and consolidation at SearchDataCenter.com.

This was first published in April 2012
