Businesses face a paradox: How do we cut IT spending while expanding operations and productivity?
Many organizations identify data center outsourcing and cloud or managed server hosting as a viable strategy for growth, systematically moving all but the most mission-critical applications and data off in-house systems to hosted infrastructure.
Outsourcing replaces irregular capital expenses with monthly operating expenses that are easily managed and scaled as computing needs change. However, potential hosted server adopters must account for several outsourcing limitations when formulating their growth strategy.
Realistic outsourcing expectations
It's hard to accept, but you don't control the data center that hosts outsourced applications. Much is beyond your control: downtime, slow application performance or loss of wide area network (WAN) carrier connectivity during upgrades, storms or other events.
An understanding of outsourcing limitations eases the transition from in-house management to a third-party provider and allows IT to adjust each application accordingly. For example, it might make perfect sense to outsource a lightly used accounting application, but the busy transactional database your business depends on should remain in-house.
Moving from owned data centers to hosted, managed systems yields clear savings in server count and power consumption, but it can also open a performance gap.
"Outsourcing can become a nasty surprise on the WAN side when the T1 line starts to get grumpy after users try to run concurrent [remote desktop protocol] sessions to servers in a remote data center," said Pete Sclafani, COO of 6connect Inc., a network control company based in San Francisco. "Having the right expectation on [local area network] LAN traffic migrating to the WAN is always a tough conversation."
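The bandwidth squeeze Sclafani describes is easy to sanity-check with back-of-the-envelope arithmetic before migrating. The sketch below estimates how many concurrent remote desktop sessions a T1 line can carry; the per-session bandwidth and overhead figures are illustrative assumptions, not vendor specifications.

```python
# Rough WAN capacity check: how many concurrent remote desktop sessions
# can a T1 line sustain? All figures are illustrative assumptions.

T1_KBPS = 1544           # T1 line capacity in kilobits per second
OVERHEAD = 0.20          # assume ~20% consumed by protocol overhead and other traffic
KBPS_PER_SESSION = 150   # assumed average bandwidth of one RDP session

usable_kbps = T1_KBPS * (1 - OVERHEAD)
max_sessions = int(usable_kbps // KBPS_PER_SESSION)

print(f"Usable bandwidth: {usable_kbps:.0f} kbps")
print(f"Estimated concurrent RDP sessions: {max_sessions}")
```

Under these assumptions, the line tops out at around eight concurrent sessions; a larger office moving LAN traffic onto that WAN link will feel the ceiling quickly.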
Since applications run the organization, outsourcing turns a managed server hosting or even colocation space provider into your business partner. Visit the company's facilities, ask targeted questions of on-staff experts and acquire an understanding of the architecture that supports the applications.
Each hosting and colocation provider's architecture, resilience and performance varies dramatically. For example, it is possible to find providers that specialize in high-performance, high-resilience facilities for truly mission-critical applications, if you're willing to pay for it.
Keep an eye on IT
Many businesses approach outsourcing systematically: select candidate applications, take initial performance benchmarks, migrate each application to the outsourcing target, measure post-migration performance and objectively judge the migration's success.
After that initial migration, regular performance monitoring will flag early warning signs of potential application problems.
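The before-and-after comparison can be as simple as a table of response times with a regression threshold. This sketch uses hypothetical application names, latency figures and a 25% tolerance; substitute your own benchmark data.

```python
# Sketch: compare pre- and post-migration benchmarks per application.
# Application names, latencies and the 25% threshold are hypothetical.

baseline_ms = {"accounting": 120, "crm": 95, "reporting": 210}        # in-house
post_migration_ms = {"accounting": 140, "crm": 180, "reporting": 230}  # hosted

THRESHOLD = 1.25  # flag anything more than 25% slower than baseline

results = {}
for app, before in baseline_ms.items():
    after = post_migration_ms[app]
    ratio = after / before
    results[app] = "OK" if ratio <= THRESHOLD else "REGRESSION"
    print(f"{app:10s} {before:4d} ms -> {after:4d} ms ({ratio:.2f}x) {results[app]}")
```

A flagged regression is a signal to revisit the workload: tune it, move it to a higher-performance tier or bring it back in-house.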
"Use off-the-shelf tools, such as [Ipswitch] WhatsUp Gold, to monitor LAN and WAN traffic performance, latency and stability," said Scott Gorcester, CEO of VirtualQube, a hosting and managed services provider outside of Seattle. "Then, depending upon the [provider] model you're working under, you'll want tools to monitor the back-end hardware."
Proper availability and performance monitoring are both key to enforcing the service-level agreement (SLA). This defines the hosting provider's commitments and outlines any consequences for unmet stipulations. It is often your responsibility as the customer to pursue recourse, and such action requires objective documentation via properly deployed monitoring tools, Gorcester said.
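Turning raw monitoring data into an objective availability figure is what makes an SLA claim stick. The sketch below computes a month's measured availability from logged downtime and compares it to an assumed 99.9% commitment; both numbers are illustrative.

```python
# Sketch: derive a monthly availability figure from monitoring logs
# and compare it to the SLA. Downtime and target are illustrative.

SLA_TARGET = 99.9                # provider's committed monthly availability, %
minutes_in_month = 30 * 24 * 60  # 43,200 minutes
downtime_minutes = 52            # total outage time your monitoring recorded

availability = 100 * (1 - downtime_minutes / minutes_in_month)
print(f"Measured availability: {availability:.3f}%")

if availability < SLA_TARGET:
    print("SLA breached -- document the outage windows and pursue recourse")
```

Note how little slack a 99.9% SLA leaves: roughly 43 minutes of downtime per month, which is why continuous, timestamped monitoring matters.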
Problem resolution usually requires prompt and effective communication. Organizations must have a clear understanding of the hosting provider's support structure and escalation path -- if those details aren't in the SLA, ensure that in-house IT staffers know the proper support channels.
"Many times, companies that outsource don't address issues until something happens, which is always a difficult way to learn and puts stress on a new vendor relationship," Sclafani said. Therefore, IT should test support resources and be familiar with the provider's system before a problem arises.
Handling multiple providers
Organizations may opt to use multiple hosting providers to ensure a level of resilience. Multiple outsourcing partners can also stave off lock-in. But managing more than one colocation site or applications spread across various hosting providers -- and potentially porting those workloads between competitors -- is problematic for IT staff.
First, set a clear goal -- know what you're trying to accomplish and why it will pay dividends for the business. For example, mixing providers to gain multiple tiers of service is often easier than doing so to increase portability. One provider might host mission-critical Microsoft Exchange servers while another hosts development virtual machines (VMs) -- migration between the two won't come into play.
Workload portability involves complicated dependencies. Providers are not in the business of ensuring interoperability with other providers. Managed hosting toolsets typically serve the provider's own infrastructure first and their customers' environments second.
"If you expect workload portability, consider things like the hypervisors used by the different providers and whether you want some kind of data replication tool running so your data set is kept up to date in your target data center," Gorcester said. "The specific tool you use will depend in part on your desired [recovery point objective] and [recovery time objective]."
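A recurring operational check behind Gorcester's point is whether the replica at the target data center is fresh enough to satisfy the recovery point objective. This sketch compares a hypothetical replication timestamp against an assumed 15-minute RPO; the times and target are invented for illustration.

```python
# Sketch: verify a replica is fresh enough to meet the target RPO.
# The RPO value and timestamps are hypothetical.

from datetime import datetime, timedelta

RPO = timedelta(minutes=15)  # assumed target: lose at most 15 minutes of data

now = datetime(2016, 3, 1, 12, 0, 0)
last_replicated = datetime(2016, 3, 1, 11, 52, 0)  # from the replication tool's log

lag = now - last_replicated
meets_rpo = lag <= RPO
print(f"Replication lag: {lag}, meets RPO: {meets_rpo}")
```

Running a check like this on a schedule, and alerting when the lag creeps past the RPO, catches replication drift before a recovery attempt exposes it.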
A provider can help ensure portability from one hosted infrastructure to another -- to a point -- but the customer must test it out and ensure that VMs readily transfer between providers, or between hosted and in-house data center facilities. It's an ongoing issue for organizations because a provider might alter its environment and adversely affect every workload's portability. For example, if one provider changes hypervisors, cross-environment workloads suddenly experience hypervisor incompatibility. Establish a firm line of communication and support with each provider if you go this route.
A second issue occurs in replication, recovery and other data protection tasks. Your IT staff must take charge of recovery and testing efforts to verify that hosted workloads and data are adequately protected and recoverable within an acceptable recovery time objective.
"We have seen many examples of incorrect operational processes or even a flawed initial setup and lack of testing," Sclafani said. "There can be data loss in the recovery process that was completely preventable."
Make cents of it all
Data center outsourcing is a viable fiscal option for many organizations. It means a business can shift expenses from capital outlays on fixed resources to monthly operating expenses that scale easily to respond to demand -- without overhead for maintenance, cooling and the other pains of data center ownership.
Few IT professionals are well versed in the financial side of the business, which makes the case for data center outsourcing hard to present. The traditional use-it-or-lose-it approach to quarterly budgets also undermines outsourcing's value.
IT, finance and executive leaders all must understand how this new computing paradigm will generate substantial ROI. That shared understanding clarifies the larger role of computing within the business and solidifies the case for similar technology investments in the future. Involving finance and technology leaders early helps the organization accurately assess the long-term impact of outsourcing decisions.
"The result [of poor communication] is usually a substandard solution that fits the [capital expenditure] model, but may not be the best solution for the technology needs and the technical budget," Sclafani said.
For example, the company ends up with software-as-a-service products that cost five times more than the in-house equivalent -- money that could have funded other projects or lowered the app's long-term cost by hosting it on dedicated servers.
Analyze and compare the potential costs for outsourcing and keeping operations in-house. Consult with prospective managed services and server hosting providers to discuss detailed breakdowns of costs and fees.
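A rough multi-year comparison is a good starting point for those conversations. Every figure in the sketch below -- hardware capex, in-house opex, hosting fee and migration cost -- is an illustrative assumption, not a real quote; plug in numbers from your own provider discussions.

```python
# Sketch: rough 3-year comparison of owning hardware vs. monthly hosting.
# Every dollar figure is an illustrative assumption, not a real quote.

YEARS = 3
capex_hardware = 60_000       # upfront servers, storage and networking
annual_inhouse_opex = 18_000  # power, cooling, maintenance, licenses
monthly_hosted_fee = 2_500    # provider's managed hosting charge
migration_cost = 10_000       # one-time transition effort and overlap period

in_house_total = capex_hardware + annual_inhouse_opex * YEARS
hosted_total = migration_cost + monthly_hosted_fee * 12 * YEARS

print(f"In-house, {YEARS} yr: ${in_house_total:,}")
print(f"Hosted,   {YEARS} yr: ${hosted_total:,}")
```

The comparison shifts with the time horizon and with hardware refresh cycles, so run it over several year counts rather than trusting a single snapshot.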
Initial outsourcing costs surprise some businesses. It may take weeks or even months for an organization to successfully transition a production workload onto hosted servers with a full suite of monitoring and data protection -- such as workload replication and recovery -- services in place. All this time, you're still operating the legacy setup. Any issues that arise, such as WAN connectivity problems, hypervisor or management tool incompatibilities or other snags, lengthen this overlap and increase the upfront costs.
"Businesses need a clear migration and testing plan so they can minimize the [amount of] time that they're paying for resources that are not in production," Gorcester said. Once a workload's transition to hosted infrastructure is complete and vetted, be sure to consider when to repurpose in-house systems.