Cloud computing has a real effect on data center design principles. The SearchDataCenter Advisory board considers how cloud computing affects data center location, and the financial and resource availability issues associated with a move to the cloud.
Clive Longbottom, co-founder, Quocirca
Cloud computing's influence on data center design principles is still emerging. Companies implementing a private cloud platform should have undergone rigorous application rationalization, hardware virtualization and consolidation exercises. This may have left them with around 20% to 40% of the original data center equipment -- even though the facility was built to power and cool much more. Power usage effectiveness (PUE) can go through the roof unless corresponding changes are made to the facility and to the equipment used. The remaining private cloud equipment may have higher densities and different cooling needs.
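A back-of-the-envelope sketch shows why PUE worsens after consolidation when the facility is not resized. The figures below are hypothetical, chosen only to illustrate the ratio:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# Before consolidation: 1,000 kW of IT load plus 600 kW of
# overhead (cooling, UPS losses, lighting) -- all figures hypothetical.
before = pue(1000.0 + 600.0, 1000.0)   # 1.6

# After consolidation the IT load drops to 300 kW, but much of the
# facility overhead is fixed and shrinks far less, say to 450 kW.
after = pue(300.0 + 450.0, 300.0)      # 2.5
```

The same equipment removal that saves energy in absolute terms makes the efficiency ratio look worse, which is why the facility itself must be rightsized along with the IT load.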
On top of this, as organizations move more workloads from their private cloud or traditional on-site facilities to colocations or public clouds, flexibility becomes even more important. With WAN capabilities improving, data center location now comes down more to data privacy and legal requirements than to the performance of the network -- except in industries such as investment banking, where every millisecond counts.
The key for new data center designs is to be flexible -- ensure that power supply and distribution, uninterruptible power supply (UPS) provisioning and cooling are not monolithic, but can grow and shrink with the needs of the IT equipment.
Modern data center infrastructure management (DCIM) and other management tools can help provide predictive capabilities, determining the issues that could arise from changing workloads or equipment. Application performance predictions can also help in deciding whether a workload should remain in the owned data center or be pushed out to a colocation facility or public cloud.
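The placement decision described above can be sketched as a simple rule: latency-critical workloads stay in-house, and predicted cost decides the rest. Everything here -- the field names, thresholds and figures -- is a hypothetical illustration, not any particular DCIM product's API:

```python
from dataclasses import dataclass

@dataclass
class WorkloadForecast:
    # Hypothetical predictions a DCIM or performance tool might produce
    latency_budget_ms: float      # max tolerable round-trip latency
    monthly_cost_onprem: float    # predicted cost in the owned data center
    monthly_cost_cloud: float     # predicted cost in a public cloud

def placement(w: WorkloadForecast, wan_rtt_ms: float) -> str:
    """Toy decision rule: keep latency-critical work on premises;
    otherwise let the predicted monthly cost drive the choice."""
    if w.latency_budget_ms < wan_rtt_ms:
        return "on-premises"
    return "public cloud" if w.monthly_cost_cloud < w.monthly_cost_onprem else "on-premises"
```

A real evaluation would fold in data privacy and legal constraints as hard filters before cost is even considered, as the discussion of location above suggests.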
Pete Sclafani, CIO, 6connect
Overall, cloud computing simply provides more options for data center design. As more cloud providers jump in and build out new services, they drive growth in data center builds. Cloud also prompts more efficiency in new data center builds and retrofits.
Companies and end users have more flexibility when it comes to data center designs and facility selection. This growing pool of cloud services makes room for more complex services that were previously out of reach. For example, what if a company wants to implement a basic disaster recovery plan? Having something as simple as an off-site backup is much easier when you can leverage existing data center infrastructure to support a cloud service, versus having to commit to the capital expenses of building and outfitting your own backup facility.
Newer, high-efficiency designs and locations also lead to more competitive pricing options and geographic possibilities. Transport logistics are still a factor, though, so look at the best combination of price and performance for your company's needs.
Done correctly, a cloud computing project allows a company to test out new services without risking significant capital expenses or business operations continuity.
Robert Crawford, systems programmer
In my world, cloud computing hasn't had as much of an impact on the physical data center as on the way we design and administer our systems.
Cloud computing allows us to clone production systems and, to a certain extent, make them interchangeable to business users. This setup, along with IBM's Parallel Sysplex, enables enough redundancy and operational ease to avoid problems. We also closely monitor our logical partitions and assign resources on demand to meet the needs of any given workload, which makes us agile.
The test-and-development environment is a more difficult case for cloud because programmers must be environmentally aware to support parallel development. However, we can provide developers with some cloud services and self-service capabilities to make their jobs easier. Examples include blank CICS regions that can be "checked out" on demand, and the ability to bounce Information Management System (IMS) message-processing regions.
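The check-out model above amounts to a self-service resource pool. A minimal sketch of the idea follows -- the class and region names are hypothetical stand-ins, not an actual CICS interface:

```python
from collections import deque

class RegionPool:
    """Toy self-service pool of pre-built, blank regions, illustrating
    the 'check out a region on demand' pattern described above."""

    def __init__(self, region_names):
        self._free = deque(region_names)   # regions ready for use
        self._in_use = set()               # regions currently checked out

    def check_out(self) -> str:
        """Hand the next free region to a developer, or fail if none remain."""
        if not self._free:
            raise RuntimeError("no blank regions available")
        region = self._free.popleft()
        self._in_use.add(region)
        return region

    def check_in(self, region: str) -> None:
        """Return a region to the pool; in practice it would be reset first."""
        self._in_use.discard(region)
        self._free.append(region)
```

The value for developers is that acquiring a test environment becomes a single self-service call rather than a ticket to the systems programming team.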
Sander van Vugt, independent trainer and consultant
Cloud computing adoption has stimulated huge changes in the way data centers are created. Now that servers are no longer just physical machines but virtual instances running on top of shared infrastructure components, it matters far less where the data center is physically located.
Data centers are not necessarily located at company headquarters anymore, consuming valuable floor space and creating power supply problems. Data centers can now be created in remote areas, where they practically run themselves. That means that choosing the right location for the data center depends on completely new factors. For instance, why bother installing your new data center in the middle of the desert, where large cooling systems are needed, if you can install a data center high up in the mountains?
The rise of cloud computing -- public and private -- and the increase in WAN bandwidth mean that data centers can be constructed almost anywhere. This not only decreases the price of data centers, but also helps bring economic activity to formerly underdeveloped areas. A modern data center doesn't require much on-site support staff, but you will need some people present, even if the data center is in the remote Alaskan tundra.
Wayne Kernochan, president, Infostructure Associates
Cloud computing versus internal data centers is not typically an either-or choice for CIOs or IT strategists. Instead, staff must consider the incremental effects of cloud computing availability -- private or public -- on current consolidation mandates and on the use of data centers with broad geographic spread.
Private cloud implementation is often the first cloud foray for large organizations. Private clouds are created on a retrofitted existing architecture, with no data center location alteration or, for that matter, physical design change, since the private cloud's virtualization extends the life of existing machines and networks. However, once the software becomes virtualized, further physical consolidation becomes attractive.
Since any software acquisition -- even cloud -- adds costs, typical budget-obsessed IT shops tend to defer cloud adoption unless there is corporate backing. Because large vendors with strong track records typically offer private cloud products, implementation efforts rarely engender availability concerns.
In the case of follow-on hybrid cloud implementation, however, financial interests argue for cloud implementation, while availability concerns argue against it. Cloud providers have made an effective case that they run corporate workloads cheaper, but the newness of dealing with public cloud providers means businesses start with concerns about application availability that are difficult to eradicate, especially in the case of mission- and business-critical apps.
As a result, most large IT shops offload either test cases for new tasks or existing non-critical apps to public clouds. Rarely do the key, run-the-business apps migrate into a public cloud architecture. Public cloud use often has less effect on existing data center design principles and locations than a private cloud implementation.