Approaches to data center capacity planning: Advisory Board Q&A

Solid technical knowledge, comprehensive tools and a keen business sense are all needed for solid data center capacity planning.


Data centers are never static. They grow as new workloads and more users are added over time. But data center capacity planning doesn’t happen by accident. Getting the right computing resources into place at the right time takes solid planning and a keen business sense. This month, we’ve asked the SearchDataCenter.com Advisory Board to share their overall strategy for data center growth, their approach to growth forecasting, their take on growth alternatives (such as virtualization or cooling temperatures), and their best business-side argument for growth.

Bill Kleyman, virtualization architect, MTM Technologies Inc.
Reacting to growth only after it arrives tends to cost an environment more than taking a proactive stance, so advance planning pays dividends.

Virtualization and better storage utilization have helped organizations get a better handle on their data and capacity needs. For example, workflow automation helps environments gauge needs and provision servers on demand. This type of virtual environment allows rapid provisioning and de-provisioning of workloads, so administrators can spin up new machines as needed on existing hardware.

There is little scrambling in this type of scenario, since the environment is already prepared for a sudden boost in usage. Storage can be used in a similar fashion when smart technologies are incorporated into the data center. Thin provisioning allows administrators to allocate just the right amount of storage to a virtual machine (VM), while solid-state drives (SSDs) and flash memory help with I/O problems. NetApp, for example, has developed an onboard card that uses intelligent caching: the 1 TB FlashCache card offloads workloads from spinning disks, which in turn lets storage engineers use their space more intelligently. Over-provisioning storage can be very expensive.
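To make the thin-provisioning math concrete, here is a minimal sketch in Python with entirely hypothetical pool and VM figures, not tied to NetApp or any other vendor. It shows how logical allocations can exceed physical capacity as long as the data actually written stays below the pool size:

# Minimal sketch of thin-provisioning accounting (hypothetical numbers).
# Each VM is allocated a logical disk size, but only the data actually
# written consumes space in the physical pool.

physical_pool_tb = 20.0  # assumed physical capacity of the array

# (vm_name, logical_size_tb, written_tb); illustrative values only
vms = [
    ("db01", 2.0, 0.6),
    ("web01", 1.0, 0.2),
    ("file01", 5.0, 1.8),
    ("vdi-pool", 25.0, 4.5),
]

logical_total = sum(size for _, size, _ in vms)
written_total = sum(written for _, _, written in vms)

overcommit_ratio = logical_total / physical_pool_tb
pool_utilization = written_total / physical_pool_tb

print(f"Logical capacity promised: {logical_total:.1f} TB")
print(f"Data actually written:     {written_total:.1f} TB")
print(f"Over-commit ratio:         {overcommit_ratio:.1f}x")
print(f"Physical pool utilization: {pool_utilization:.0%}")

# The pool is healthy as long as written data stays comfortably below
# physical capacity; alert well before it fills so capacity can be added.
if pool_utilization > 0.8:
    print("Warning: thin pool above 80%; plan a capacity purchase.")

The point is simply that thin provisioning only buys headroom as long as someone watches the pool's real consumption.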

Upper-level IT managers must have their data center forecasting views directly in line with the goals of the business. By understanding a company's business growth strategy, IT executives are able to better forecast growth within their data centers. Is there an acquisition coming? Is there a push for new customer relationship management software? Is there a need for a more distributed data center environment? Monitoring existing trends within a data center is also important. When an IT environment is well documented, monitored and kept up to date, it's able to grow and expand much more robustly.

And any infrastructure must be able to handle sudden change. When working with internal systems, never max out your computing resources. By having data center capacity available (storage, licensing, servers, VMs, etc.), an environment is able to quickly adapt to daily business fluctuations.
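As a rough illustration of that "never max out" rule, the following sketch checks whether a projected spike fits within installed capacity minus a headroom buffer; the core counts and the 25% reserve are assumptions chosen purely for illustration:

# Sketch of a simple headroom check (all figures are assumptions).
# Keep a reserve so daily fluctuations never push utilization to 100%.

installed_cores = 512        # total vCPU capacity in the cluster
headroom_fraction = 0.25     # reserve 25% for spikes and failover

usable_cores = installed_cores * (1 - headroom_fraction)

current_demand = 310         # cores consumed by steady-state workloads
projected_spike = 60         # cores expected from a known business event

if current_demand + projected_spike <= usable_cores:
    print("Spike fits within existing capacity plus headroom.")
else:
    shortfall = (current_demand + projected_spike) - usable_cores
    print(f"Shortfall of {shortfall:.0f} cores; add capacity or defer workloads.")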

Data center capacity planning will always depend on the needs of the business. However, there are three major options to help with sudden capacity increases or even long-term planning.

First, virtualization: by using VMs, companies are able to reduce their hardware footprint and run more efficiently. With fewer hardware components, we usually see a reduction in data center management effort and cost. Bring-your-own-device initiatives also help reduce costs, since administrators no longer need to manage endpoint devices. Data, in this case, is stored centrally and can be accessed anywhere, at any time and on virtually any device. This also helps with security, since a lost device is no longer a liability. Thin clients running virtual desktops let companies reduce their reliance on full desktop PCs. In small environments, the difference might be minimal; however, imagine removing 2,000 or more desktops and replacing them with a small footprint like a thin client. Resource-intensive user groups can still have dedicated resources assigned directly to them; for example, administrators can assign a physical graphics card to a selected VM.
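As a back-of-the-envelope illustration of the desktop-to-thin-client argument, the sketch below uses assumed wattages, powered-on hours and an assumed electricity rate; it is not a claim about any particular product:

# Back-of-the-envelope desktop-to-thin-client power comparison.
# Fleet size, wattages, hours and electricity rate are all hypothetical.

desktops = 2000
desktop_watts = 150        # assumed average draw of a desktop PC
thin_client_watts = 15     # assumed average draw of a thin client
hours_per_year = 2500      # assumed powered-on hours per seat
rate_per_kwh = 0.10        # assumed electricity cost in dollars

def annual_cost(watts):
    return desktops * watts / 1000 * hours_per_year * rate_per_kwh

savings = annual_cost(desktop_watts) - annual_cost(thin_client_watts)
print(f"Estimated annual power savings: ${savings:,.0f}")
# Server-side VDI hosts add load back, so net this against their draw too.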

Second, many organizations are beginning to see how the cloud can be beneficial for data center capacity planning and management. Large organizations have offloaded entire workloads to the cloud and no longer have to manage physical hardware at all. Amazon's EC2 service offers a very attractive price point per server, and organizations can see a tangible benefit in moving to that type of infrastructure. Some companies adopt a hybrid approach, where some architecture is left in the data center while, for example, all test and development is done in the cloud. By eliminating hardware, we see cost savings in licensing, data center power and cooling, and overall infrastructure management. As cloud technology continues to evolve, it also becomes easier to manage users. Tools like Citrix's Open Cloud Access allow easy identity federation, letting users authenticate to Web-based applications with only their Active Directory credentials.

Third is storage, which is one of the most over-provisioned, and often most expensive, elements in any environment. Purchasing extra disks to try to solve problems like I/O bottlenecks is more of a Band-Aid than a solution. Be smart with storage. EMC's Clariion storage area network uses SSDs that can take on dedicated workloads known to be I/O intensive, reducing the need for extra disks. Also, as mentioned earlier, NetApp's FlashCache can help by offloading heavy virtual desktop infrastructure (VDI) workloads, allowing existing spinning disks to serve more users.

The business case is always tricky. IT managers can push for new technologies to be delivered to their data centers – even in this economic environment. The trick is how it's proposed to the managing board. As IT professionals, we understand all of the cool features within a certain product. However, this isn't what interests executives outside the IT field. A company looking to save costs can do so with smart technology purchases.

For example, old servers are expensive – they cost more money to manage, parts are pricier and their density isn't as good as what is currently available on the market. IT managers must show short-term and long-term return on investment with any technologies they wish to bring into their environments.

Consider how a Cisco UCS Blade Chassis can help save a company money: By removing older workloads and aging servers, we are able to move applications and users to a smaller but more robust environment. There is an immediate cost savings in power reduction, data center space utilization and ease of manageability. Long-term savings come in the form of fast hardware provisioning and allowing an IT department to better cope with fluctuations in the business strategy. Other examples involve deploying VDI to remove a massive desktop hardware footprint; incorporating cloud-ready technologies to better scale a business while keeping costs down; or using virtualization to remove older workloads from the data center.
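A hedged sketch of the kind of payback arithmetic that can back up such a proposal follows; every dollar figure is a placeholder, not vendor pricing:

# Simple payback-period sketch for a consolidation project.
# All dollar amounts are placeholders, not vendor pricing.

capital_cost = 250_000          # new chassis, blades, licensing (assumed)
annual_savings = {
    "power_and_cooling": 40_000,
    "maintenance_on_retired_gear": 55_000,
    "admin_time": 25_000,
}

total_annual_savings = sum(annual_savings.values())
payback_years = capital_cost / total_annual_savings
three_year_roi = (3 * total_annual_savings - capital_cost) / capital_cost

print(f"Payback period: {payback_years:.1f} years")
print(f"Three-year ROI: {three_year_roi:.0%}")

Framed this way, the conversation with the board is about when the investment pays for itself rather than about feature lists.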

Matt Stansberry, director of content and publications, Uptime Institute
In a business environment with growing IT demand and a rotten economy, data center capacity planning can be difficult. Even companies that may have recently built new data centers are running out of space, power or cooling and are turning to third-party IT service providers to meet growing IT capacity demand. Companies may use multiple enterprise data centers, outsourced hosting providers and even cloud services to meet different business objectives. It’s a challenge to manage this disparate portfolio of infrastructure assets.

Infrastructure assets are widely spread across multiple organizational and geographical boundaries. Misinformed executive-level decisions, which are unfortunately rampant in our industry, can be traced back to a lack of correct and complete information. Data center managers need cross-disciplinary skills and insight into other departments to make effective infrastructure capacity investments.

Steve Carter, vice president of Uptime Institute Digital Infrastructure Services, said IT managers need to forecast at least six months out, and suggested that data center capacity planning can extend out to three to five years and beyond.

“You can plan even further out by aligning capacity planning with your company’s business plan,” Carter said. “If your organization is planning to move into new global markets, or mergers and acquisitions, you need to factor those activities into IT capacity planning. It’s an educated guess beyond five years, but at least you can offer your executives a range of options.”

The first step for an IT department facing capacity constraints is to take stock of current technologies and utilization, and then extend data center life by increasing virtualization, adopting thin provisioning and implementing well-known, accepted best practices on the IT side.

“According to The Info Pro, the average computational utilization of servers is around 15%,” Carter said. “Just moving that number from 15% to 30% can remove constraints in space, power and cooling. Virtualization, consolidation, technology refresh and data storage optimization – these are the ways to extend the life of your data center.”
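Carter's 15%-to-30% point is easy to check with simple arithmetic; the sketch below uses an assumed 400-host estate purely for illustration:

# How raising average utilization shrinks the physical footprint.
# Host count and utilization targets are illustrative.

hosts_today = 400
utilization_today = 0.15
utilization_target = 0.30

# Total useful work stays constant; fewer hosts simply run hotter.
useful_work = hosts_today * utilization_today
hosts_needed = useful_work / utilization_target

print(f"Hosts needed at {utilization_target:.0%} utilization: {hosts_needed:.0f}")
print(f"Hosts freed (space, power, cooling): {hosts_today - hosts_needed:.0f}")

Doubling average utilization halves the number of hosts needed for the same work, which is exactly the space, power and cooling relief Carter describes.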

That may be easy to say, but hard to fund in this economy. IT managers need to explain to C-level execs holding the purse strings that they will be spending more money regardless – and it’s better to spend smart.

Bill Bradford, senior systems administrator, SUNHELP.org
In terms of physical facilities and data center growth, we've been moving toward using fewer physical resources as older systems are retired. A policy of "no physical dedicated servers unless absolutely necessary" is in place, so a lot of new hardware being deployed is meant for virtualization. We might have more VMs on the network, but fewer physical machines in the data center.

We have tools for growth forecasting. VMware vSphere/ESXi is our main virtualization platform, and its Capacity IQ reporting and analysis tools are great for looking at past and current trends, as well as “what if” scenarios, to see if known upcoming workloads will fit on our current hardware – or if additional capacity needs to be purchased.
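The sketch below is a simplified stand-in for that kind of "what if" check, not CapacityIQ output; the cluster totals, workload sizes and the 85% planning ceiling are all assumptions:

# Simplified stand-in for a "what if" capacity check; illustrative only.
# Cluster figures and upcoming workload sizes are hypothetical.

cluster = {"cpu_ghz": 800.0, "ram_gb": 4096.0}   # total cluster resources
used = {"cpu_ghz": 520.0, "ram_gb": 2900.0}      # current average consumption

upcoming_vms = [
    {"name": "erp-app", "cpu_ghz": 40.0, "ram_gb": 192.0},
    {"name": "erp-db", "cpu_ghz": 60.0, "ram_gb": 384.0},
]

target_ceiling = 0.85   # don't plan past 85% of any resource

for resource in cluster:
    projected = used[resource] + sum(vm[resource] for vm in upcoming_vms)
    if projected > cluster[resource] * target_ceiling:
        print(f"{resource}: projected {projected:.0f} exceeds the planning ceiling; buy capacity")
    else:
        print(f"{resource}: projected {projected:.0f} fits within current hardware")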

We are cautious of growth and the cloud. Cloud is a nice marketing buzzword, but a lot of companies don't want (or aren't able) to entrust their data to a third party. Even the mighty Google and Amazon have outages, and a lot of upper management would prefer to have their own staff in direct control of the infrastructure and resources rather than being just another customer to a service provider.

As for making a business case, I'm lucky to be far enough down on the food chain that I don't have to worry about money for growth – someone a couple levels above me gets to handle the planning for that. I'm still in the trenches with the folks that implement technologies and roll out hardware, and I like it here.

Robert Crawford, lead systems programmer and mainframe columnist
Capacity planning has always been important for mainframes because they are a large expense that must be carefully managed. Fortunately, mainframe tools that produce and report on resource utilization are very mature. The utilization data can also be fed into expert data center capacity planning software that renders relatively accurate forecasts for what a shop needs and when they need it.

An enterprise can stay ahead of data center growth using these forecasting tools. For example, a good plan might be to use a two-year forecasting horizon that plots actual usage during critical days against projected capacity. Capacity planners should also use statistical analysis to figure out a margin of error and account for that possible error in their planning. Note that each shop must decide for itself what defines "critical" days and whether to plan for peak or average utilization.
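As a minimal illustration of trend-plus-margin-of-error forecasting, consider the sketch below; real mainframe planning tools model seasonality and critical-day peaks far more rigorously, and the MIPS history here is made up:

# Minimal trend-plus-margin-of-error sketch (illustrative data only).
# Real capacity planning tools model seasonality, critical days, etc.

import statistics

# Peak MIPS consumed on "critical" days, one observation per quarter (assumed).
history = [1200, 1260, 1345, 1390, 1480, 1555, 1610, 1700]

n = len(history)
xs = list(range(n))
x_mean, y_mean = statistics.mean(xs), statistics.mean(history)

# Least-squares trend line through the historical peaks.
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# Residual spread as a crude margin of error around the trend.
residuals = [y - (intercept + slope * x) for x, y in zip(xs, history)]
stderr = statistics.stdev(residuals)

horizon = 8   # eight more quarters covers a two-year planning window
forecast = intercept + slope * (n - 1 + horizon)
print(f"Two-year forecast: {forecast:.0f} MIPS (+/- roughly {2 * stderr:.0f})")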

However, surprises and unbudgeted workloads do arrive. In general, depending on the margin of planning error, an enterprise can absorb unexpected workloads by keeping a little headroom over their actual needs. If push comes to shove, a data center may activate a mainframe’s "banked engines" that it already owns but has turned off. Either of these strategies should work. What a mainframe shop wants to avoid more than anything else is an unscheduled upgrade that involves buying a new book or entire processor.

Robert McFarlane, data center design expert, Shen Milsom Wilke Inc.
When it comes to data center growth, the most important strategy that we advise our clients to follow is modularity. Everyone wants to plan a facility that will last 15 or 20 years, but no one is foolish enough to actually claim that he has an idea of what’s really going to happen more than five years out (if that). Thankfully, we have many tools these days that enable us to grow things like uninterruptible power supply and cooling capacity in realistic increments (and without "forklift upgrades"), and also let us do it with a great deal of flexibility.

Flexibility is not just about the physical location of equipment, it's also the ability to take advantage of different technologies as necessary. An example might be the installation of over-sized header pipes on a chiller plant, with a number of strategically located taps that could be used for in-row coolers, rear-door coolers, direct-cooled servers, and so on. Technologies like power busways provide a similar kind of flexibility on the power side. No particular solution is right for every situation, but the right use of what we have available can make life a lot easier down the road.

When forecasting growth, we ask clients for as much historical data as they can provide and any business or usage trends that either have or are anticipated to affect the business. For educational institutions, it's the research programs that tend to have the greatest impact. We then overlay that with consolidation trends for space, and density trends (from ASHRAE TC 9.9) for power and cooling. It's far from an exact science, but it's better than simply accepting someone's statement that he wants to build “X” square feet of space with no basis for how he got there.

There are alternatives to growth. Virtualization is the hot topic today, but it will reach some kind of a limit, or at least a flattening of the curve. Since I deal with the facility end of things, I have to encourage the consideration of higher operating temperatures, but I also recognize the importance of making that decision with full knowledge, information and understanding. Just increasing computer room air conditioning set points may not be advisable. Cooling takes space and costs money to install and operate. Retrofitting existing data centers for "free cooling" can be difficult and often impractical, but new data centers should include it, because it will soon be required. That means designing and operating at higher temperatures to take maximum advantage of the free cooling technology for as much of the year as possible.

Business cases can be tricky. Today, C-level executives require real justification before releasing the money it takes to build or renovate a data center. And if it's done without sufficient budget, in most cases it's really not worth doing at all. That means understanding industry trends and doing real studies, not just extrapolating growth on a spreadsheet. I'm not qualified to advise on "creative financing," but designing modularly will reduce initial capital investment as well as operating expense, and will be a big step toward making the investment worthwhile in the long term.

Robert Rosen, CIO, mainframe user group leader
Growth is an interesting problem. Since we are in a real estate-constrained situation, our growth strategy is based on ever-increasing efficiencies like virtualization. Our big problem is storage, since there is little you can do to decrease the size of physical storage. We are also weight limited. As we get denser, we get heavier. That is a dead-end path unless vendors offer a helium option. We are currently looking at cloud storage as a solution.

Planning is straightforward. We do look at trends and use a variety of tools (PerfMon, Tivoli TPC) and we actually do a variety of statistical projections to get a feel for the growth. We use a two-year window, but we look back as far as we have data to see if there are anomalies.

In terms of alternative technologies, we see virtualization to limit server growth (scale up rather than scale out with blades); higher temperature use (our parent agency has mandated a 78 degree Fahrenheit input temperature); and we’re looking at cloud storage.

Ultimately, the data speaks for itself in a strong business case. We say, "Increase capacity, or here are the business impacts (deleting older data, etc.)." You have to give executives alternatives and show the impact. Usually the business impact is sufficient justification.

Stay ahead of the curve
There is simply no substitute for thoughtful data center capacity planning based on accurate data gathered with the proper tools and examined from the context of business needs. With that foundation, IT managers can stay one step ahead of their data centers’ growth needs and deploy other important technologies like virtualization.

This was first published in November 2011
