Scaling up your data center when demand exceeds computing capacity

When outsourcing your data center for more computing capacity, there are several courses of action to weigh before inking a contract.


When a business finds that its computing demands outstrip its capacity, it has several ways to meet them. One choice is to scale up the data center, but there are other options to weigh before making any concrete plans.

Issues to consider before expanding your data center

Part 1: A variety of options make data center expansion difficult

Part 2: Scaling up your data center when demand exceeds computing capacity

Outsource with managed services

Some organizations demand more computing capacity but lack the in-house IT staff to manage the work needed for containers or colocation services. For them, managed service providers may be just the answer.

Like colocation providers, managed service providers (MSPs) handle their own infrastructure and facilities, but the entire effort -- a complete platform with access to equipment, services and perhaps even canned applications -- is managed by MSP staff. Most MSPs provide little -- if any -- insight into their actual operations. "It's 'fire and forget' IT," said Chris Steffen, principal technical architect at Kroll Factual Data. "Customers say, 'You handle it and just let us know if we have problems.'"

But there are other substantial differences. Costs can run higher than with colocation because customers are paying for the MSP's management and service personnel in addition to the infrastructure. Unexpected or frequent service changes, such as adding capacity, can also spike costs.

MSPs are also typically less open to negotiation because they emphasize cost competitiveness through economies of scale. In effect, customers use what the MSP has available and accept the provider's SLA with little change.

Measuring and verifying the level of service is often a point of contention. Don't underestimate the importance of an SLA when dealing with an MSP, said Robert McFarlane, an analyst at Shen Milsom Wilke LLC. It should define escalation paths for service and support, as well as remediation expected when problems or disruptions occur.
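
To put numbers on that: a 99.9% monthly availability target allows only about 43 minutes of downtime in a 30-day month. Below is a minimal sketch in Python for checking your own outage measurements against an SLA target; the target and outage figures are illustrative, not taken from any real contract.

```python
# Minimal sketch: verifying measured uptime against an MSP's SLA target.
# The 99.9% target and the outage figure are illustrative, not from any
# specific provider contract.

def availability(minutes_in_period: float, downtime_minutes: float) -> float:
    """Return availability for the period as a percentage."""
    return 100.0 * (minutes_in_period - downtime_minutes) / minutes_in_period

MINUTES_IN_30_DAYS = 30 * 24 * 60  # 43,200 minutes
SLA_TARGET = 99.9                  # percent, illustrative

outage_minutes = 55.0              # from your own monitoring, not the MSP's reports
measured = availability(MINUTES_IN_30_DAYS, outage_minutes)
allowed_downtime = MINUTES_IN_30_DAYS * (1 - SLA_TARGET / 100)  # 43.2 minutes

print(f"Measured availability: {measured:.3f}% (target {SLA_TARGET}%)")
print(f"Allowed downtime: {allowed_downtime:.1f} min; actual: {outage_minutes:.1f} min")
if measured < SLA_TARGET:
    print("SLA missed -- invoke the escalation and remediation paths in the contract.")
```

Tracking those figures independently is exactly why the SLA's escalation and remediation terms matter: without your own measurements, the provider's reporting is the only evidence.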

Still, the flip side of lower flexibility is faster deployment: a typical MSP can be engaged in a matter of weeks. The actual provisioning and spin-up of services can be much faster because there are few (if any) custom tasks the MSP must perform in advance. The biggest obstacle is usually deploying the appropriate connectivity on the customer's side; more sophisticated workloads and user bases may require fiber links through the local telco provider.

Prospective customers must pay close attention to the regulatory ramifications of putting sensitive information into third-party hands. "Check with your compliance or legal teams and determine what regulatory concerns apply to the capacity planning solution you choose," Steffen said. "Finding a solution that doesn't comply will be a disaster."

Plug into public cloud

The final option available for data center growth is the public cloud, such as Amazon Web Services (AWS). Public cloud can best be explained as colocation with the added flexibility of self-service provisioning and a high level of on-demand scalability on top of a shared virtual infrastructure.

A customer can connect to a public cloud provider, provision a server, migrate a workload and start running it in the cloud in less than 15 minutes. Public cloud customers can add or subtract computing resources on the fly in response to computing demands and then pay for only the resources (such as processing cycles) that are actually used. It's the ultimate expression of "pay to play" or "utility computing." Such control makes public cloud computing the most flexible and granular of all data center capacity options -- ideal for organizations with large temporary spikes in computing demand.
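
As a rough illustration of that self-service model, here is a minimal sketch using the AWS SDK for Python (boto3). The AMI ID, instance type and region are placeholders, and a real deployment would add networking, security groups and error handling.

```python
# Minimal sketch of on-demand provisioning with the AWS SDK for Python (boto3).
# The AMI ID, instance type and region are placeholders -- substitute values
# valid for your own account. Billing stops when the instance is terminated,
# which is the "pay only for what you use" model in practice.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a single server on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ... migrate the workload and run it ...

# Release the capacity -- and stop paying for it -- when demand subsides.
ec2.terminate_instances(InstanceIds=[instance_id])
print(f"Terminated {instance_id}")
```

The same add-and-release cycle can be scripted or automated, which is what makes the model attractive for temporary demand spikes.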

Public cloud deployments are generally accessed through typical Internet connectivity, which rarely needs upgrades unless the workloads generate substantial network traffic. The two bigger issues for public cloud users are application and data suitability.

A growing number of data center applications can run remotely from the cloud, but applications not designed explicitly for the cloud may not perform at optimal efficiency, and older legacy applications may not run at all. Customers should test applications and measure performance over time before committing an app to the cloud.
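
A simple starting point for that testing is to sample response times from where the application's users sit and compare the distribution against the on-premises baseline. A minimal sketch using only the Python standard library, with a hypothetical endpoint URL:

```python
# Minimal sketch: sampling response times for an application endpoint so
# cloud performance can be compared against the on-premises baseline.
# The URL is hypothetical; real tests should run over days, not seconds.
import statistics
import time
import urllib.request

URL = "https://app.example.com/health"  # hypothetical endpoint
samples = []

for _ in range(20):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    time.sleep(1)

samples.sort()
print(f"median: {statistics.median(samples):.1f} ms")
print(f"p95:    {samples[int(len(samples) * 0.95) - 1]:.1f} ms")
```

Comparing medians alone can hide problems; the tail latencies (p95 and worse) are usually where a poorly suited application shows its age in the cloud.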

"Refactoring the application for the cloud can offer worthwhile gains," Steffen said. "And a cloud provider can help optimize the code."

Rackspace is just one example of a cloud provider offering assessment services to help customers review current applications and establish deployment plans. AWS offers the SDKs, toolkits and documentation that programmers need for cloud application development.

However, regulatory and security concerns can pose thorny issues. The challenge is particularly acute for cloud computing, where the physical location of servers and storage is purposefully abstracted -- users shouldn't know or care where the computing resources are as long as they're available. But this is at odds with government and industry regulations that typically require direct control over regulated data locations.

Steffen suggested that regulators simply have not yet caught up to technology, but McFarlane isn't so enthusiastic about cloud security matters. "If people can hack the Joint Chiefs, why believe that cloud providers have security worked out?" McFarlane asked. "I don't think we know enough yet." Customers must approach security with pragmatism; some workloads simply should not be in the cloud -- yet.

Mix and match to maximize computing capacity

The best news is that these capacity options are not mutually exclusive; an organization can combine solutions and adjust that mix over time to meet short- and long-term business plans. For example, containers or colocation might be the perfect choice for that second or remote data center, while unexpected spikes in tomorrow's computing demands can be met with a cloud provider. Or multiple containers could be joined to create a full-featured facility while routine backups are sent to an MSP.

But regardless of your capacity predictions, business leaders should plan for growth now. That involves looking past simple value propositions and recognizing workload management and compliance requirements, which can become convoluted once workloads leave the main data center.
