
Practical steps for building an internal cloud

Internal cloud architecture is the best way to grow IT while maintaining a flat budget. Expert Chris Wolf explains the impact of cloud computing and how to approach setting up an internal cloud.


Cloud-based application delivery is becoming an inevitable part of IT. Re-architecting IT and business processes for internal cloud-based service delivery will take years to complete. Now is the time to begin laying the foundation for an internal cloud architecture.

Seasoned IT folks have heard about cloud computing for years, and if they're cynical about the whole cloud concept, they have a reason to be. Other concepts, like grid computing, have come and gone without much fanfare.

But remember that x86 virtualization received a similarly cold reception, and today the x86 hypervisor is the default platform for newly deployed applications in many organizations. That is exactly why cloud computing won't follow the same path as grid computing and its other IT-as-a-service brethren.

Defining the cloud
You may be asking yourself, "Isn't cloud the same thing as IT as a service?" You're right, of course, but IT marketing folks don't get paid to recycle old names. Hence, we have the cloud. Here at Burton Group, we've had considerable debate about what constitutes "the cloud," and here's what we came up with:
The set of disciplines, technologies and business models used to render IT capabilities as on-demand services.

You may find that definition to be overly generic, and it is. That's because the cloud encompasses so many technologies, which can be organized into the following buckets:

  • Software as a Service (SaaS). SaaS includes purpose-built (i.e., designed to be externally hosted) applications that are hosted by vendors external to the organization. Salesforce.com and Google Apps are good examples of successful SaaS-delivered applications.
  • Platform as a Service (PaaS). PaaS offers cloud-based platforms that can be used by organizations to run their applications externally in the cloud. Microsoft Azure is an example of PaaS.
  • Software infrastructure as a service. Software infrastructure as a service is a standalone cloud service that provides a specific application support capability but not the entire software platform service. If it did, it would be PaaS.
    For example, Amazon SimpleDB and Microsoft SQL Data Services are software infrastructure services. Although Microsoft SQL Data Services is included on the Azure platform, it is also offered as a standalone service; hence its software infrastructure service designation.
  • System infrastructure as a service. System infrastructure as a service provides physical or virtual hardware offered as a service. This architecture caters to traditional virtual infrastructure vendors, such as VMware, Microsoft and Citrix.

The top three cloud tiers -- SaaS, PaaS and software infrastructure as a service -- are designed primarily to be externally hosted platforms, while system infrastructure as a service can be hosted internally by the IT organization or externally by a third-party service provider. In the coming years, a large number of organizations will use virtual infrastructure to host some services internally (the internal cloud) and others externally (the external cloud).

Deploying an infrastructure that allows workload mobility between internal and external platforms will give IT organizations a high degree of flexibility and will reduce operational costs moving forward. For example, a number of organizations use external providers today to host training and development resources.

In addition, many companies run applications that are heavily used for a specific period of the year and sit practically idle for the remainder. For these workloads, external clouds make a lot of sense. Why run physical servers all year when they're only needed for three months? Doing so wastes power, cooling and data center floor space, and it ties up administrative resources managing the systems 24/7.
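
To make the trade-off concrete, here is a back-of-the-envelope Python sketch. The per-server monthly rates and the ten-server, three-month season are illustrative assumptions, not quoted prices.

```python
# Compare running a seasonal workload on dedicated internal servers
# year-round against renting external cloud capacity only for the
# months it is actually used. All rates are illustrative assumptions.

INTERNAL_COST_PER_SERVER_MONTH = 450.0  # power, cooling, floor space, admin (assumed)
EXTERNAL_COST_PER_SERVER_MONTH = 700.0  # provider's rolled-up monthly rate (assumed)

def annual_cost(servers: int, months_needed: int) -> tuple[float, float]:
    """Return (internal year-round cost, external pay-per-use cost)."""
    internal = servers * INTERNAL_COST_PER_SERVER_MONTH * 12
    external = servers * EXTERNAL_COST_PER_SERVER_MONTH * months_needed
    return internal, external

internal, external = annual_cost(servers=10, months_needed=3)
print(f"Internal, 12 months: ${internal:,.0f}")  # $54,000
print(f"External, 3 months:  ${external:,.0f}")  # $21,000
```

Even with the external provider's higher monthly rate, paying only for the months the workload actually runs wins by a wide margin under these assumptions.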

Figure 1 depicts an infrastructure-as-a-service cloud model with three clouds. Being a site doesn't make a location a cloud; rather, each site provides an IT-as-a-service delivery model. Because their systems are internally hosted, the Raleigh and Atlanta sites have internal clouds, while the service provider represents an external cloud. The external cloud may be integrated with the internal clouds' virtual infrastructure management layer to allow internal VM images to be deployed and run on the external provider's infrastructure, supporting tasks such as training and development.

Figure 1: Connecting internal and external clouds

Server virtualization as a cloud foundation lets organizations continue to use the operating systems and applications they use today, without requiring developers to learn different and possibly proprietary programming APIs for an external cloud provider.

In addition, the hardware abstraction offered by virtualization gives organizations the flexibility to move workloads as needed, which allows all applications, even those that are not cluster-aware, to be highly available.

Virtualization can simplify a lot of IT tasks, such as deployment, patching, updating, scheduled maintenance and disaster recovery. But it also introduces new challenges.

Our clients say that compliance concerns remain the primary barrier to enterprise adoption of public clouds. If providers want enterprises to seriously consider their platforms, they will have to do a better job of addressing compliance and security concerns around public shared infrastructures.

Laying a foundation for the internal cloud
The x86 hypervisor should be the foundation for a movement to internal clouds, but that's just the start. Like it or not, once the external cloud matures, it will be harder to keep business units in the dark about the cost of IT.

Although virtual machine (VM) hosting in the external cloud is here today, too many questions remain regarding security and compliance to allow enterprises to be comfortable leveraging any type of external shared physical infrastructure for VM hosting.

Moving to a cloud-based infrastructure is a steady, deliberate process. The total cost of ownership reduction that can be derived from a cloud-based architecture is well worth the investment. Here are some practical steps to begin your move down the path toward an internal cloud architecture:

  • Make x86 virtualization the default platform for all newly deployed x86 applications.
  • Invest in virtualization-aware application management tools.
  • Begin work to realign business processes and implement chargeback.
  • Leverage orchestration tools to automate common IT processes.
  • Leverage self-service provisioning for training, test and development.
  • Consider the impact on security and regulatory compliance with all cloud architectures.
  • Design the architecture to avoid lock-in.

Make virtualization the default x86 application platform
Today, x86 servers are being purpose-built to run virtualized workloads, removing many use cases for purchasing dedicated physical hardware for single applications. In fact, many applications cannot take full advantage of the hardware in one of today's newer 2U form factor servers. Table 1 compares the hardware in three of the latest 2U form factor servers.

Table 1: Today's Hardware for Virtualization

Server                 Max memory          CPU                       Expansion
Dell PowerEdge R710    144 GB (18 slots)   2 quad-core Xeon 5500s    2 PCIe x8, 2 PCIe x4
HP ProLiant DL380 G6   144 GB (18 slots)   2 quad-core Xeon 5500s    Up to 6 (PCIe, PCI-X)
IBM x3650 M2           128 GB (16 slots)   2 quad-core Xeon 5500s    4 PCI Express

The hardware-assisted memory virtualization offered by the new server platforms, such as Intel EPT and AMD RVI, also removes the substantial memory performance latency that plagued multi-threaded enterprise applications running in virtual machines in the past. Virtualization's consolidation, mobility and disaster recovery benefits alone are justification to migrate existing applications to a virtual infrastructure. Now that the substantial improvements brought by hardware-assisted virtualization remove performance as an issue in most applications, virtualization should be the default platform.
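
For a rough feel of what consolidation on this class of hardware looks like, here is a small Python sketch that estimates VM density when memory is the binding constraint. The hypervisor overhead, average VM size and 1.25x overcommit ratio are assumptions for illustration, not vendor guidance.

```python
# Rough consolidation estimate for the 2U servers in Table 1:
# how many VMs fit on one host if memory is the limiting resource?

def vms_per_host(host_memory_gb: float, avg_vm_memory_gb: float,
                 hypervisor_overhead_gb: float = 4.0,
                 memory_overcommit: float = 1.25) -> int:
    """Memory left after hypervisor overhead, stretched by overcommit."""
    usable = (host_memory_gb - hypervisor_overhead_gb) * memory_overcommit
    return int(usable // avg_vm_memory_gb)

# e.g., a 144 GB Dell R710 or HP DL380 G6 hosting average 4 GB VMs:
print(vms_per_host(144, 4))  # -> 43 with the assumed 1.25x overcommit
```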

Of course, if you have application clusters that require the full compute power of bare metal, those applications should stay unvirtualized for the time being, at the end of your virtualization to-do list. Note that many application owners may offer some resistance, so education will be needed.

The tables have turned: IT no longer needs to "sell" virtualization to application owners. Instead, application owners should have to justify why an app needs bare metal.

Invest in virtualization-aware application management tools
Treating infrastructure as a cloud may be ideal for users, applications and business units that need IT services quickly, but it can be quite painful for the IT staff. If an application is experiencing a performance issue, how do you quickly determine the physical data path?

Securing the cloud
A U.S. Department of Defense-like standard security model would be ideal for cloud service providers. Assume that the following options were available:

Level A: Dedicated physical and virtual infrastructure, including dedicated server and networked storage assets.

Level B: Dedicated virtual and physical server infrastructure, shared/logically zoned storage infrastructure -- clients receive dedicated LUNs, but data traverses a shared physical SAN.

Level C: Shared virtual and physical infrastructure, isolation provided by dedicated virtual security appliances, such as VM firewalls, IDS and IPS.

Level D: Shared virtual and physical infrastructure, no appliance-based segmentation and isolation; isolation is provided via VLANs, for example.
The tiered model would be a good option because it can be easily consumed by security auditors. My sample model is just a starting point for discussion.
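
To show how such a tiered model might be consumed in practice, here is a minimal Python sketch that maps a workload's isolation requirement to the acceptable service levels. Only the A-D levels come from the model above; the workload classes and policy table are hypothetical.

```python
# Pick the service levels that satisfy a workload's isolation needs.
from enum import IntEnum

class ServiceLevel(IntEnum):  # lower value = stronger isolation
    A = 1  # dedicated physical + virtual infrastructure, dedicated storage
    B = 2  # dedicated servers, dedicated LUNs on a shared physical SAN
    C = 3  # shared infrastructure, virtual security appliances (FW/IDS/IPS)
    D = 4  # shared infrastructure, VLAN-based isolation only

REQUIRED_LEVEL = {  # assumed policy table, for illustration only
    "regulated-financial": ServiceLevel.A,
    "pci-cardholder":      ServiceLevel.B,
    "internal-business":   ServiceLevel.C,
    "test-dev":            ServiceLevel.D,
}

def acceptable_levels(workload_class: str) -> list[ServiceLevel]:
    """All levels at or stronger than the required isolation."""
    required = REQUIRED_LEVEL[workload_class]
    return [lvl for lvl in ServiceLevel if lvl <= required]

print(acceptable_levels("pci-cardholder"))  # Levels A and B qualify
```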

In time, external cloud security concerns will be addressed via technology, standards, compliance bodies and defined interpretations within the auditing community. Molding an internal infrastructure to fully leverage an external cloud will take years to complete. However, there's no reason to wait for the external cloud to fully bake before applying cloud principles to your management infrastructure.

If a security auditor asks you to provide the physical location of an application and all of its dependent resources, how do you do that? Sure, the hypervisor is but one layer of abstraction. But throw in single root I/O virtualization on 10 GbE NICs, multi-root I/O virtualization and storage virtualization, and now you have four layers of abstraction.

Even if you're not using all of these forms of virtualization, it's good to plan for their presence within your infrastructure. Select tools capable of seeing through each layer and that can provide the diagnostic information you need.
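
As a sketch of what "seeing through each layer" means in practice, the following Python fragment records each mapping from application down to physical storage so the auditor's location question has a ready answer. The inventory entries are hypothetical; a real tool would pull them from the virtualization and storage management APIs.

```python
# Each entity maps to the layer directly beneath it.
UNDERLIES = {
    "payroll-app":         "vm-payroll-01",        # app -> VM
    "vm-payroll-01":       "esx-raleigh-07",       # VM -> hypervisor host
    "esx-raleigh-07":      "datastore-silver-02",  # host -> virtual datastore
    "datastore-silver-02": "san-array-A/LUN-14",   # datastore -> physical LUN
}

def physical_path(entity: str) -> list[str]:
    """Walk the abstraction layers down to physical hardware."""
    path = [entity]
    while entity in UNDERLIES:
        entity = UNDERLIES[entity]
        path.append(entity)
    return path

print(" -> ".join(physical_path("payroll-app")))
```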

Realign business processes and implement chargeback
The technology to deliver an internal cloud is present today and steadily maturing. But realigning procurement, asset management, security policies, support processes and accounting processes, to name a few, calls for a substantial amount of work. Getting the most out of a shared physical infrastructure requires IT to take ownership of physical assets and charge back individual business units.

Organizations that have adopted business processes for outsourced IT, such as some government agencies, are better equipped for cloud-based IT. They probably already have the processes and accounting procedures necessary to treat internal IT as a service.

Although organizations can continue to travel down the path toward department-level physical asset ownership, doing so reduces the economies of scale possible with a shared infrastructure. It can also drive up operational expenses and total cost of ownership (TCO).
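
A minimal chargeback sketch in Python, assuming simple flat unit rates per vCPU, gigabyte of RAM and gigabyte of disk; the rates and inventory are invented for illustration:

```python
# Bill each business unit for the virtual resources it consumes,
# rather than for dedicated physical assets.
from dataclasses import dataclass

RATE_PER_VCPU_MONTH    = 25.0   # assumed
RATE_PER_GB_RAM_MONTH  = 10.0   # assumed
RATE_PER_GB_DISK_MONTH = 0.50   # assumed

@dataclass
class VM:
    owner: str
    vcpus: int
    ram_gb: int
    disk_gb: int

def monthly_charge(vm: VM) -> float:
    return (vm.vcpus * RATE_PER_VCPU_MONTH
            + vm.ram_gb * RATE_PER_GB_RAM_MONTH
            + vm.disk_gb * RATE_PER_GB_DISK_MONTH)

inventory = [VM("finance", 2, 8, 100), VM("finance", 4, 16, 250), VM("hr", 1, 4, 50)]
bills: dict[str, float] = {}
for vm in inventory:
    bills[vm.owner] = bills.get(vm.owner, 0.0) + monthly_charge(vm)
print(bills)  # {'finance': 565.0, 'hr': 90.0}
```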

Consider cloud architectural impact
Security and regulatory compliance concerns remain a key barrier to doing more with shared physical infrastructure. Progress on standards and compliance interpretations is helping auditors. Several vendors, such as Tripwire, ConfigureSoft and Third Brigade, are offering tools that assist with compliance validation.

Still, the typical enterprise today is most comfortable with some form of physical isolation. Most organizations use separate physical clusters to isolate security zones.

Others provide isolation at the network level by dedicating virtual switches and physical network ports to each zone. Security subzones -- different internal trusted zones, for example -- may share a virtual switch with isolation provided by VLANs.

The benefits of self-service provisioning
From a user's perspective, ordering a test server doesn't have to be any different from buying music on iTunes. There are plenty of self-service provisioning engines available, including the following:

  • Citrix Essentials for XenServer and Hyper-V
  • Microsoft System Center Virtual Machine Manager User Self-Service Portal
  • Surgient Virtual Automation Platform
  • VMware Lab Manager
  • VMLogix LabManager

Self-service provisioning reduces the TCO associated with deploying new development, test and training systems and falls into the no-brainer category among virtualization management software investments. Many tools include quota enforcement and integrated lifecycle management and allow unused VMs to be retired.

Self-service provisioning is an easy step toward an internal cloud because users simply order a service through a Web interface, much in the way they order services for their home. In the end, how the service is delivered isn't important, as long as it meets users' expected performance and reliability levels.
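
Here is a minimal Python sketch of that pattern: a request succeeds only if the user is under quota, and every VM receives a lease so unused machines can be retired automatically. The quota and lease length are assumed policy values, not defaults from any of the products listed above.

```python
# Self-service provisioning with quota enforcement and lifecycle management.
from datetime import date, timedelta

QUOTA_VMS_PER_USER = 5    # assumed policy
DEFAULT_LEASE_DAYS = 30   # assumed policy

active_vms: dict[str, list[dict]] = {}   # user -> provisioned VMs

def request_vm(user: str, template: str) -> dict:
    """Grant a VM from a template if the user is under quota."""
    owned = active_vms.setdefault(user, [])
    if len(owned) >= QUOTA_VMS_PER_USER:
        raise RuntimeError(f"{user} is at the {QUOTA_VMS_PER_USER}-VM quota")
    vm = {"template": template,
          "expires": date.today() + timedelta(days=DEFAULT_LEASE_DAYS)}
    owned.append(vm)
    return vm

def retire_expired(today: date) -> None:
    """Lifecycle management: reclaim VMs whose lease has lapsed."""
    for user, vms in active_vms.items():
        active_vms[user] = [vm for vm in vms if vm["expires"] > today]

print(request_vm("alice", "win2008-test"))
retire_expired(date.today())  # nothing retired yet; the lease has 30 days left
```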

Use orchestration tools to automate processes
User self-service is a natural first step toward an internal cloud; consider virtual infrastructure orchestration and chargeback steps two and three. Orchestration allows many common IT management processes to be automated, including the following:

  • Server provisioning and decommissioning
  • Change control enforcement
  • VM startup or failover following an outage
  • Rebalancing VM workloads across physical hosts
  • Resizing VM hardware based on performance needs
  • Powering down unneeded physical resources

Again, any type of dynamic VM orchestration or workload rebalancing will need to include awareness of security zoning restrictions. Most orchestration tools have graphical workflow engines that allow administrators to quickly string together a series of processes to automate a particular task.

In the case of server virtual machine deployment, the workflow could include creating the VM, associating it with the correct VLAN, mapping storage and assigning the proper administrative roles. A series of unique workflows could be created for each VM type, organized by factors such as department or physical location.
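
A minimal Python sketch of such a workflow, with the four steps chained in order; the step bodies are stubs standing in for calls to the virtualization platform's management API:

```python
# A provisioning workflow expressed as an ordered list of steps.
from typing import Callable

def create_vm(ctx: dict) -> None:
    ctx["vm"] = f"vm-{ctx['name']}"
    print(f"created {ctx['vm']}")

def attach_vlan(ctx: dict) -> None:
    print(f"{ctx['vm']} attached to VLAN {ctx['vlan']}")

def map_storage(ctx: dict) -> None:
    print(f"{ctx['vm']} mapped to datastore {ctx['datastore']}")

def assign_roles(ctx: dict) -> None:
    print(f"granted admin role to {ctx['admin_group']} on {ctx['vm']}")

# One workflow per VM type; here, a finance-department server in Raleigh.
finance_raleigh_workflow: list[Callable[[dict], None]] = [
    create_vm, attach_vlan, map_storage, assign_roles,
]

def run(workflow: list[Callable[[dict], None]], ctx: dict) -> None:
    for step in workflow:
        step(ctx)   # a real engine would add error handling and rollback

run(finance_raleigh_workflow,
    {"name": "fin-app-01", "vlan": 210, "datastore": "raleigh-silver-01",
     "admin_group": "finance-admins"})
```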

Disaster recovery automation is another growing use for virtual infrastructure automation, with tools such as VMware Site Recovery Manager and Citrix Workflow Studio providing the capability to pre-script a disaster recovery failover response. This allows you to preprogram the VM restart order so that the most critical applications are returned to service first.
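
A sketch of what such a pre-programmed restart order might look like, with hypothetical tier assignments; a real runbook would power on each VM through the platform API and wait for a health check between tiers:

```python
# Restart the most critical tier first, then work down the list.
RESTART_PLAN = [
    ("tier-1 critical",        ["vm-ad-01", "vm-sql-01"]),      # identity, databases
    ("tier-2 core apps",       ["vm-erp-01", "vm-mail-01"]),
    ("tier-3 everything else", ["vm-intranet-01", "vm-build-01"]),
]

def failover() -> None:
    for tier, vms in RESTART_PLAN:
        print(f"starting {tier}: {', '.join(vms)}")
        # power on each VM and verify health before the next tier

failover()
```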

Some elements of orchestration will likely be application-specific and workload-specific. For example, dynamically adding resources -- such as memory, CPU or storage -- to a running VM sounds like a great way to non-disruptively solve a performance problem. But hot resource add requires support from both the guest OS and the applications running in it.

For example, an application may run on Windows Server 2008, which supports hot add, yet be programmed to consume a fixed amount of memory at startup. Furthermore, some applications cannot acquire more memory without a restart.
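
The decision can be captured as a simple guard, sketched below in Python with assumed support flags: hot add memory only when both the guest OS and the application can actually use it.

```python
# Decide between hot add now and a scheduled resize with a restart.
def can_hot_add_memory(guest_os: str, app_uses_dynamic_memory: bool) -> bool:
    os_supports_hot_add = guest_os in {"windows-server-2008",
                                       "windows-server-2008-r2"}
    # an app that grabs a fixed allocation at startup gains nothing from hot add
    return os_supports_hot_add and app_uses_dynamic_memory

if can_hot_add_memory("windows-server-2008", app_uses_dynamic_memory=False):
    print("hot add memory now")
else:
    print("queue a resize for the next maintenance window")
```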

Dynamically shutting down servers sounds like a great way to save power and cooling costs, but most organizations are not comfortable implementing dynamic power management for their servers without official support from their server hardware vendors. Burton Group expects server vendors to begin to support dynamic power management this year, so the support concern should be resolved in time.
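
For illustration, a minimal Python sketch of the underlying logic: compute the smallest host count that carries the current load with headroom, and power down the rest. The 75% headroom threshold and the load figures are assumptions; as noted above, act on this only with your hardware vendor's support.

```python
# Power down hosts when the cluster can carry its load on fewer machines.
import math

def hosts_needed(total_load: float, headroom: float = 0.75) -> int:
    """Smallest host count keeping each host below `headroom` utilization.
    `total_load` is total demand expressed in fully loaded host equivalents."""
    return max(1, math.ceil(total_load / headroom))

active_hosts = 8
cluster_load = 3.2   # demand equal to 3.2 fully loaded hosts (assumed)

target = hosts_needed(cluster_load)
if target < active_hosts:
    print(f"evacuate and power down {active_hosts - target} host(s)")  # 3
```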

Avoid lock-in in the architecture
Designing a cloud-based infrastructure that is compatible with external cloud providers requires careful attention to lock-in when selecting platforms. A cloud platform that requires a proprietary programming interface and is hosted by only a single provider is a recipe for lock-in and potentially high exit costs. On the other hand, a virtual infrastructure-based cloud platform that can be hosted internally and is available from a large number of external providers gives enterprises choice as well as bargaining power going forward.

Regardless of how convinced you are about the future of external cloud, using cloud principles internally can yield significant operational expense savings. Taking practical steps to arrive at an internal cloud model is the best way to continue to grow IT while maintaining a flat budget.

Lay the foundation using virtual infrastructure, implement user self-service provisioning and then move to orchestrating other common IT tasks. Aligning business and IT processes to support chargeback within a virtual infrastructure requires considerable effort, but the long-term flexibility provided by a system infrastructure as a service cloud-based IT delivery model is well worth it.

ABOUT THE AUTHOR: Chris Wolf, an analyst at Midvale, Utah-based Burton Group's Data Center Strategies service, has more than 15 years of experience in the IT trenches and nine years of experience with enterprise virtualization technologies. Wolf provides enterprise clients with practical research and advice about server virtualization, data center consolidation, business continuity and data protection. He authored Virtualization: From the Desktop to the Enterprise, the first book published on the topic, and has published dozens of articles on advanced virtualization topics, high availability and business continuity.


This was last published in March 2010
