
Growing cloud capabilities influence IT capacity planning

IT capacity planning must take into account decreasing hardware space requirements, rising densities and the potential to interface with cloud services.

The cloud, in many ways, is the trigger for the greatest changes to data centers in the remainder of this decade.

It is still a relatively new practice for IT to handle peak loads with public clouds; to move noncritical tasks, backup and archiving out to cloud service providers such as Amazon Web Services, Google and Microsoft Azure; and to build internal private clouds. Hybrid cloud and density advances in hardware mean big changes for IT capacity planning.

The hybrid structure of in-house data center plus public cloud that the industry is adopting all but enforces the use of commercial off-the-shelf (COTS) systems. While some vendors claim their proprietary products are cloud-capable, the disconnects at the private/public cloud boundary will make life difficult. There is no safe place for mainframes: with software-as-a-service packages replacing legacy in-house ERP, for example, the number of data centers housing mainframes will drop quite rapidly.

That's a lot of space freed up, but there isn't an explosion of COTS systems to fill it. The volume of data center IT infrastructure needed per COTS virtual machine is dropping rapidly, with virtual containers on one side and new CPU and memory technologies on the other. These technologies should yield gains in system performance per cubic foot of as much as four to eight times over the next five years, with containers multiplying that by as much as four times more.
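
Compounding those two factors shows why the footprint shrinks so fast. Here is a minimal arithmetic sketch, assuming the container gain multiplies the hardware gain (the factors are the bounds cited above):

    # Compound density gain: hardware advances multiplied by container
    # consolidation. The factors are the bounds cited in the text.
    hw_gain_low, hw_gain_high = 4, 8   # CPU/memory performance per cubic foot
    container_gain = 4                 # further consolidation from containers

    low = hw_gain_low * container_gain
    high = hw_gain_high * container_gain
    print(f"Combined density gain over five years: {low}x to {high}x")
    # 16x to 32x the work per cubic foot of data center space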

Even with a growth market for processing power in hardware, these changes will reduce server purchases and associated data center square footage needs considerably. The lifecycle of existing COTS gear will shorten to bring newer technologies into the data center, further increasing density and reducing needed space.

One might argue that the data center industry is on the cusp of huge growth in storage capacity, with big data and the Internet of Things driving demand. This is a plausible scenario, but look at storage gear's evolution over the last five years and it's clear that this won't expand data centers much. In fact, flash-based solid-state drives (SSDs) have made compression a viable option, with raw data typically compressed to one-fifth of its initial size.

In addition, storage drive capacity is growing. Expect SSDs with 20 to 30 terabytes soon -- the current capacity leader has 16 TB. That means a 60-drive, 4U cabinet will hold 1.8 raw petabytes: 9 PB with compression. Compare that with current storage area network boxes that offer 60 TB, using 1 TB enterprise hard drives. Data centers will need fewer appliances even with all that data growth.
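
As a sanity check on those numbers, here is a minimal Python sketch using the drive count, projected drive capacity and compression ratio cited above:

    # Back-of-the-envelope capacity for a 60-drive, 4U cabinet.
    drives_per_cabinet = 60    # 4U, 60-drive enclosure
    drive_capacity_tb = 30     # projected high-capacity SSD
    compression_ratio = 5      # roughly 5:1, per the flash compression above

    raw_pb = drives_per_cabinet * drive_capacity_tb / 1000
    effective_pb = raw_pb * compression_ratio
    print(f"Raw: {raw_pb:.1f} PB, effective: {effective_pb:.1f} PB")
    # Raw: 1.8 PB, effective: 9.0 PB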

This IT capacity planning scenario so far assumes in-house growth in the owned data center or colocation space, but the reality is that the public cloud is a strong alternative for that growth. Expect large portions of the IT workload to migrate to public clouds over the next five years, as departments within organizations discover the agility and lower costs of renting IT. The bottom line is even more shrinkage for data center planners.

Cloud impact on jobs

While the long-term effect on IT admin jobs is profound -- fewer, more broadly skilled administrators handling more servers -- the transition from legacy IT infrastructure to automation in cloud environments will generate high demand for admins and software tools in the near term.

How to plan IT capacity in the new data center model

What needs to change in the near term as we look at the data center? The old economic models are dubious, so look at more aggressive depreciation. It's easy to make a case for four years versus the traditional eight for a server's life. In reality, this faster refresh cycle holds true for storage and network gear as well; one could even argue for a two-year IT infrastructure refresh cycle.
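
To see what a shorter cycle does to the books, here is a minimal straight-line depreciation sketch; the $10,000 purchase price is a hypothetical figure for illustration, not one from the article:

    # Straight-line depreciation at different refresh cycles.
    # The purchase price is hypothetical, for illustration only.
    server_price = 10_000

    for life_years in (8, 4, 2):
        annual_charge = server_price / life_years
        print(f"{life_years}-year cycle: ${annual_charge:,.0f} per year")

The shorter cycle carries a higher annual charge per box, but each refresh also brings in the density gains described above, so the cost per unit of work can still fall.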

Look at the existing equipment a bit more closely during IT infrastructure capacity planning. Redundant arrays of independent disks are quickly becoming boat anchors. All-flash arrays, super-high-capacity drives, solid-state drives and the move to appliance-based data availability have made RAID obsolete. Analyst firm Gartner still sees high levels of enterprise hard drive sales in its tracking, which indicates a reluctance to migrate to newer, much better solutions and a poor understanding among IT capacity planners of the economics involved.

Virtual containers, as opposed to traditional hypervisor virtualization, reduce the number of servers needed for your current workload by significant factors -- up to five times denser server hosting, depending on the use. This means IT organizations can delay server purchases until free capacity is used up. Likewise, the compression feature in storage will reduce the need for new storage by roughly the same factor.
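
Here is a minimal sketch of what that density factor means for purchase planning; the fleet size is hypothetical, and the 5x factor is the upper bound cited above:

    import math

    # Deferring server purchases through container consolidation.
    current_servers = 200   # hypothetical fleet hosting hypervisor VMs
    density_gain = 5        # containers vs. VMs, upper bound cited above

    servers_needed = math.ceil(current_servers / density_gain)
    freed = current_servers - servers_needed
    print(f"Servers needed after containerization: {servers_needed}")
    print(f"Free capacity to absorb growth: {freed} servers' worth")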

The realization that IT operations need not stand alone has been a tremendous psychological shock in IT, but the economics of the cloud approach are inexorable, and the concept is achieving rapid acceptance.

In many cases, IT organizations deploy large numbers of drives to provide adequate IOPS, especially to host databases. In-memory processing reduces the total number of servers -- although it requires more expensive ones -- by factors reaching 100 times. Consider the opportunity to speed up core databases for your business' operation, while offsetting server purchases for other purposes. The net savings may pay for the new database machines and leave some money in the budget for another project.
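
A rough sizing sketch shows why drive counts balloon under IOPS demands and why in-memory processing changes the math; the workload target and per-device ratings here are illustrative assumptions, not figures from the article:

    import math

    # IOPS-driven sizing for a hypothetical database workload.
    target_iops = 200_000
    hdd_iops = 200        # typical enterprise 10K/15K RPM hard drive
    ssd_iops = 50_000     # typical enterprise SSD

    print(f"HDDs needed: {math.ceil(target_iops / hdd_iops)}")  # 1000
    print(f"SSDs needed: {math.ceil(target_iops / ssd_iops)}")  # 4
    # In-memory processing sidesteps the disk bottleneck entirely, which
    # is how a few large-memory servers can replace racks of spindles.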

All of these IT capacity planning changes are good preparation for the future data center: they'll save money and simplify operations. New practices for running the IT infrastructure will be necessary to get the correct economic and operational models in place.

Do you need a data center?

In the next five years, IT organizations should ask: Do we need in-house computing at all? Small and medium-sized businesses are already shedding physical IT infrastructure, and this will be a dominant trend for SMBs by 2020. You only have to look at Amazon's hosting of small retailers to understand why this is inevitable.

Medium-scale enterprises typically face more complex IT capacity decisions, commonly owning purchased apps and small data centers. Considering the learning curve and complexity of building a small cloud in house, these organizations might opt for the public cloud for its agility and ease of management.

Large-scale enterprises have different choices. These businesses can deal directly with the same IT infrastructure vendors as Google and other hyperscale buyers, so major enterprises can run current, inexpensive data center hardware. It may well be that a private cloud is less expensive than buying services, given the efficiency lost at the public/private boundary in the hybrid model.

