Depending on whom you listen to, the impact of cloud computing falls somewhere on a spectrum from "very little -- I wouldn't touch it with a barge pole" to "complete game-changer -- this signals the end of the IT department." Where along this spectrum does reality lie?
Unfortunately, there is no definitive answer: the right place on the spectrum for your organization depends on many variables, such as its risk profile, the volume of in-house applications, and the age of the existing data center facility and its contents.
When looking to create the optimum private data center facility for IT, it is best to start from the perspective that flexibility is everything. It makes no difference if cloud is going to be the biggest thing to hit your organization or not; any lack of flexibility in the data center could be catastrophic to the business.
If businesses start from a laissez-faire position and let everything continue "as is" in the data center with little change to the IT architecture, the data center still has to be able to grow as the business grows. If the current facility becomes a constraint, then it will have to be replaced -- not the best option under current financial conditions. Therefore, something else has to provide the flexibility that IT and the business demand.
Virtualization is one option for lowering the amount of equipment needed to handle current workloads -- often by 50% or more. This sharp drop in the amount of IT equipment under management may sound great, but if the remaining kit sits in the middle of the same-sized facility with no changes to uninterruptible power supplies (UPS), backup generation or cooling, the data center will not be optimized and power usage effectiveness (PUE) values will climb through the roof.
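To see why PUE worsens, recall that PUE is total facility power divided by IT equipment power. The sketch below uses purely illustrative figures (the 500 kW load and 400 kW overhead are assumptions, not numbers from this article) to show the effect of halving the IT load while facility overhead stays fixed:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Before virtualization: 500 kW IT load plus a fixed 400 kW of facility
# overhead (cooling, UPS losses, lighting) sized for that load.
before = pue(500 + 400, 500)   # 1.8

# After consolidating the IT load by half, with the overhead unchanged
# because UPS, generation and cooling were never resized:
after = pue(250 + 400, 250)    # 2.6 -- PUE worsens even though total power falls
```

The absolute power bill drops, but the ratio deteriorates, which is exactly the trap of virtualizing the IT estate without re-engineering the facility around it.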
The counter to this is that virtualization is a one-time fix. Once it is in place, growth in IT equipment -- and the space it needs -- will resume, unless new workloads are pushed out of the facility and into a public cloud.
The logistics of moving to cloud
So just what does a shift to the cloud mean to how the existing facility is architected? Cloud computing is still in its early years; although the theory is strong, in many cases the practice leaves a little to be desired. Building a data center that shrinks as workloads are pushed out is one thing, but if the workloads are to be brought back in due to a cloud provider not meeting requirements or going out of business, how will it be possible to grow the private facility rapidly enough to embrace this? Even where the cloud provider meets its responsibilities, what else needs to be done within the facility to ensure the end user experience is good enough for continued use?
The first thing is to move to a more modular, off-the-shelf IT equipment model, using the likes of Cisco UCS, VCE Vblocks, IBM PureFlex or Dell vStart systems, rather than building your own racks. Although "build-your-own" may sound like it provides greater flexibility, the actual speed of response to business needs is often compromised, whereas the preconfigured modules in modern systems can be put in place and provisioned rapidly, with more resources added as required.
Greater modularization also enables a more structured approach to other areas of the data center. Hot and cold aisles in the private data center allow more targeted cooling. Combined with higher operating temperatures, variable-speed computer room air conditioning (CRAC) units -- or even free-air cooling -- and resized, more granular UPS and generation systems from the likes of Eaton, Emerson or Schneider, this gives greater control over how the facility and the IT equipment work together.
To optimize space usage within the facility, false walls can be put up for offices or other business space -- but make sure these walls run from subfloor to ceiling, to stop cooling air leaking through gaps above a dropped ceiling or beneath a raised floor. Hot air captured from the cooling systems can then be used for space heating in these walled-off areas in colder climates, or put through heat pumps for water heating in hotter ones.
To ensure the end-user experience is optimized for a hybrid cloud environment, investigate wide area network (WAN) acceleration or optimization systems to minimize latency across a much more distributed IT platform. Here, vendors such as Silver Peak, Riverbed and Cisco offer hardware and software that can maintain application performance across long distances.
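Why latency, rather than raw bandwidth, dominates here is worth a quick back-of-envelope check. The figures below are illustrative assumptions (a "chatty" operation needing 20 sequential round trips, 1 ms LAN vs. 40 ms WAN round-trip times), not measurements from any vendor:

```python
def response_time_ms(round_trips: int, rtt_ms: float, server_ms: float = 50.0) -> float:
    """Rough user-perceived response time for an operation that requires
    `round_trips` sequential network round trips plus fixed server processing."""
    return round_trips * rtt_ms + server_ms

# A chatty operation needing 20 sequential round trips:
lan = response_time_ms(20, 1)    # 70 ms on a 1 ms LAN -- feels instant
wan = response_time_ms(20, 40)   # 850 ms across a 40 ms WAN -- noticeably sluggish

# WAN optimization (caching, protocol proxying) that cuts the trips to 5:
optimized = response_time_ms(5, 40)  # 250 ms -- usable again
```

This is what WAN optimization products are really selling: fewer effective round trips, not just compressed bytes.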
If a move to cloud computing is in the works, it will also be important to architect applications correctly, keeping network traffic within the facility as far as possible. This can be done through virtualized desktops, with the business and presentation logic essentially co-located and only the visual aspects of the interaction presented to the user's device. Not only does this ensure the best levels of performance, it also enhances security by keeping all storage centralized.
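A simple transfer-time calculation shows why keeping the data next to the logic matters. The dataset size and link speed below are illustrative assumptions for the sake of the arithmetic:

```python
def transfer_seconds(megabytes: float, link_mbps: float) -> float:
    """Time to move a file across a link: size in MB, link speed in Mbit/s."""
    return megabytes * 8 / link_mbps

# Pulling a 200 MB dataset to a remote fat client over a 10 Mbit/s link:
pull = transfer_seconds(200, 10)  # 160 s for every open/save cycle

# With a virtualized desktop, the dataset never leaves the data center;
# the session streams only display updates, at a rate that is broadly
# independent of how large the underlying data is.
```

The same logic is why centralized storage is also a security win: the sensitive bytes never land on the endpoint at all.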
Embracing cloud computing means that IT and the facilities group must work more closely together. Creating a cloud architecture without addressing the facility will drive PUE values up, undermining any claim the organization makes to sustainability in its computing. Equally, building a facility "for cloud" without understanding the dynamics of the IT involved -- and the likely strategy for outsourcing and insourcing workloads -- will result in a data center that lacks the flexibility to truly support the business.
On the spectrum of adoption, moving to cloud computing is likely to shift the focus of computing steadily away from the private data center -- but only over a period of many years. Bringing IT and facilities together will ensure the flexibility, at the right speed and capability, to support the business.
ABOUT THE AUTHOR: Clive Longbottom is co-founder and service director at Quocirca and has been an ICT industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anticancer drugs, car catalysts and fuel cells before moving to IT. He has worked on many office automation projects, as well as Control of Substances Hazardous to Health, document management and knowledge management projects.