While most CIOs like the hybrid cloud approach, pesky realities introduce issues that are anything but minor -- such as underinvestment in fiber connections by U.S. and some E.U. telco operators. Welcome to the network limbo in which a cloud bursting architecture can find itself.
A lack of bandwidth between the public and private cloud reduces cloud bursting and agile repositioning of IT workloads to theoretical concepts. Why cloud burst under heavy demand if it takes hours to move the relevant data to the public cloud? Moreover, the cost of moving data within a cloud burst configuration, measured in local area network (LAN) bandwidth and storage accesses, is huge.
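To see why "hours to move the relevant data" is no exaggeration, a rough transfer-time calculation makes the point. The dataset size, link speeds and efficiency factor below are hypothetical illustrations, not figures from any particular provider:

```python
# Rough transfer-time estimate for a cloud burst data move.
# Dataset size and link speeds here are hypothetical examples.

def transfer_hours(dataset_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move dataset_gb over a link_gbps WAN link.

    efficiency accounts for protocol overhead and contention.
    """
    gigabits = dataset_gb * 8
    seconds = gigabits / (link_gbps * efficiency)
    return seconds / 3600

# A 10 TB working set over a 1 Gbps uplink:
print(round(transfer_hours(10_000, 1.0), 1))   # -> 31.7 hours
# The same set over a 10 Gbps fiber link:
print(round(transfer_hours(10_000, 10.0), 1))  # -> 3.2 hours
```

At one gigabit per second, a modest 10 TB working set ties up the link for well over a day, which is exactly why urban fiber rollout matters to this architecture.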
This issue with network resources led NetApp and Verizon, for example, to set up a joint venture that colocates customer data in Verizon data centers to speed up public cloud access. For enterprises that want a cloud burst architecture, the issue remains: bringing that data to the in-house part of the hybrid cloud. Essentially, the bandwidth problem still exists, but it now sits in the local part of the hybrid cloud rather than in the cloud provider's domain.
There are four possible ways to resolve the weak cloud burst dilemma.
Forget hybrid -- go public
Everything the company runs in IT could move to the public cloud. Someone will invoke compliance with regulations such as the Sarbanes-Oxley Act or the Health Insurance Portability and Accountability Act, which is not a trivial issue. Colocation is acceptable in terms of level of control, because it means you own the IT infrastructure that lives in the multi-tenant compound, but a public cloud is a whole other multi-tenant beast. In 2016, we should see Amazon Web Services (AWS) and other public cloud providers offer either dedicated systems or long-term contracts within their own clouds, effectively creating a colocation capability that should satisfy compliance regulations.
Colocation-in-cloud setups provide the necessary cloud bursting architecture, together with fast storage access, that makes for a truly agile cloud. The long-term nature of the machine ownership provides the configuration stability needed and allows for tighter security on enterprise data and systems, while keeping the agility to cloud burst when needed.
Get in on fast WAN
A second alternative is to colocate the private part of the hybrid cloud deployment at a telco or colocation provider with fast wide area network (WAN) links. This allows the normal workload to run locally over a fast LAN, while fast fiber links to the big public clouds support cloud bursting or workload rebalancing -- far better connectivity than an in-house cloud can typically achieve.
Burst on the inside
Another potential cloud bursting architecture keeps everything in your own data center, rolling everything in-house. Internal cloud bursting might be possible with careful scheduling and frequent rebalancing of loads, but this is not yet a well-automated process, so it isn't easy to implement.
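The "frequent rebalancing" an internal burst requires can be sketched as a greedy scheduler that shifts load off hot hosts. The host names and the 80% threshold below are hypothetical; real cluster managers also weigh memory, affinity and the cost of migrating each workload:

```python
# Minimal sketch of internal load rebalancing (hypothetical hosts/threshold).

def rebalance(hosts: dict[str, float], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return (from_host, to_host) moves that shift load off hot hosts.

    hosts maps host name -> CPU utilization (0.0-1.0). Each move
    transfers a 0.1 slice of load, as a stand-in for migrating one
    VM or container.
    """
    moves = []
    load = dict(hosts)
    for hot in sorted(load, key=load.get, reverse=True):
        while load[hot] > threshold:
            cold = min(load, key=load.get)
            if load[cold] + 0.1 > threshold:
                break  # nowhere left to burst internally
            load[hot] -= 0.1
            load[cold] += 0.1
            moves.append((hot, cold))
    return moves

print(rebalance({"host-a": 0.95, "host-b": 0.40, "host-c": 0.55}))
# -> two moves from host-a to host-b
```

The `break` branch is the crux of the approach: once every host is near the threshold, there is nowhere left to burst, which is precisely the limit of an all-internal architecture.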
Throw up a wall
Another option is to partition the data so that some subsets always remain in the public cloud. Consider this architecture if the data is transient, as with some classes of big data; the boundary between the public and private portions can then be dynamic, supporting cloud bursting. For example, a set of sensor data streams from retail stores could be sorted by customer name prior to reaching the analytics cloud, and dynamically split up by the first initial of the name.
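The dynamic split described above can be sketched in a few lines. The A-M/N-Z boundary and the record fields are hypothetical examples; the point is that the boundary letter can be moved at runtime to rebalance the two sides:

```python
# Sketch of routing retail sensor records by first initial of customer
# name. The boundary letter and record fields are hypothetical.

def route(record: dict, boundary: str = "M") -> str:
    """Send records up to `boundary` to the private cloud, the rest public."""
    initial = record["customer"][0].upper()
    return "private" if initial <= boundary else "public"

stream = [
    {"customer": "Alvarez", "reading": 21.4},
    {"customer": "Nguyen", "reading": 19.8},
    {"customer": "Baker", "reading": 22.1},
]
for rec in stream:
    print(rec["customer"], "->", route(rec))
# Alvarez -> private, Nguyen -> public, Baker -> private
```

Under load, shifting `boundary` toward "A" pushes a larger share of the stream to the public side -- a cloud burst expressed as a data-partitioning decision rather than a workload migration.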
The right architecture for your IT organization
Overarching all of these discussions, though, is the economic question. Is the public cloud cheaper than an in-house cloud? Can colocation save me money? These are actually more than pure pricing questions. Moving the workloads between the owned data center, colocation or public cloud options changes the performance dynamics of any solution, so IT jobs may run considerably longer in one environment over another.
Security is also an issue, though public cloud providers such as AWS invest heavily in counter-threat measures and tenant isolation. Often, rather than look at acceptable workload outsourcing, the discussion falls to myths and preconceptions around security.
Cost is the next issue. The complexity of run times factors into cost considerations. Storage access efficiency, virtual instance performance and other factors sway run time -- and the calculations will frustrate IT managers. If the idea of dedicated servers or instances in public clouds catches on, it should be possible to directly mimic the instances, or even servers, of in-house configurations. The performance issues will most likely come from storage access speed. Here, running in house is probably faster, since storage data flows can be tuned somewhat. Dedicated systems in public clouds could draw close to in-house performance, however. Access from a traditional colocation facility to a third-party provider is as fast as in-house operation for the local portion, but will suffer from latencies on the cloud burst portion in the public cloud.
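The run-time-adjusted comparison this argument calls for looks like the sketch below. All rates and slowdown factors are hypothetical placeholders, not published provider pricing; the takeaway is that the cheapest hourly rate is not automatically the cheapest job:

```python
# Sketch of a run-time-adjusted cost comparison; all numbers hypothetical.

def job_cost(base_hours: float, slowdown: float, rate_per_hour: float) -> float:
    """Effective cost of a job that runs `slowdown` times longer in an environment."""
    return base_hours * slowdown * rate_per_hour

environments = {
    # name: (relative run time vs. in-house, $/hour for the capacity)
    "in-house":        (1.0, 3.00),
    "colo + fast WAN": (1.1, 2.60),
    "public cloud":    (1.4, 2.20),
}

for name, (slowdown, rate) in environments.items():
    print(f"{name}: ${job_cost(10, slowdown, rate):.2f} for a 10-hour job")
```

With these illustrative numbers, the public cloud's lower hourly rate is eaten by its 40% slowdown (mostly storage latency), ending up costlier than the colocation option.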
The migration to any colocation scenario carries a long-term commitment. The typical contract term of one or two years locks the IT organization into that colocation vendor -- a critical upfront decision. Even though the result may be cloudy, the buyer loses the ability to shop around among vendors constantly competing on price. The colocation facility's carrier uplink quality is another factor when bursting to the public cloud. Quality matters more than variety of choice here, and one colo provider may have different uplink options in different data centers. In addition, the disruption caused by the change; the moving and duplication of data; changed processes and procedures; and the auditing of governance and compliance all factor into the financial modeling of any cloud bursting architecture.
Telcos must begin the long-delayed rollout of urban fiber connections, even if it's too late to benefit IT shops that need cloud bursting options today. By the time the network bandwidth issue is resolved, hybrid clouds will most likely take the form of colocation servers in the cloud, or completely hosted setups.
It's unlikely a regrouping back to in-house data centers will occur.
Not all cloud discussions hinge on the hybrid structure. It is possible to place IT workloads all in public or all in private cloud infrastructure, and many IT organizations currently do. That precludes the agility of cloud bursting and probably costs a bit more in server capacity requirements, but it is still an acceptable paradigm for the sizable minority of large cloud users that aren't yet planning hybrid clouds.