Develop a solid virtualization capacity planning strategy
A comprehensive collection of articles, videos and more, hand-picked by our editors
Data center managers face a consistent challenge: the business demands more compute resources, but also wants to reduce power consumption, cooling and other facilities costs. When the time comes to evaluate necessary data center upgrades, a step-by-step plan can be a big money saver down the road.
Data center capacity planning is a major step in the right direction, and can help strengthen the relationship between IT and other areas of the business. And while IT concerns around big data and IoT -- and how those technologies will affect data center capacity -- are valid, they can be worked around.
Here are five tips to help you tackle data center capacity planning.
Ask the right questions before server upgrades and installations
Whether installing the first server in your data center or undertaking a broader data center capacity planning project, asking the right questions will go a long way toward preventing potential issues down the road, according to expert Stephen Bigelow.
First and foremost, determine whether your facility is capable of handling an influx of servers -- both from a temperature and connectivity standpoint. Also consider the uninterruptible power supply (UPS) in your data center, because a new batch of servers could overextend the current UPS capacity.
After the physical components check out, examine software licenses. Each new server requires an OS, a hypervisor, management tools and other software. During data center capacity planning, determine whether you need to purchase a new license for these components.
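One way to sanity-check the UPS question before ordering hardware is a simple headroom calculation. The sketch below is illustrative only -- the function name, wattage figures and 80% safety margin are assumptions for the example, not values from any vendor:

```python
def ups_headroom(ups_capacity_watts, current_load_watts,
                 new_server_watts, server_count, safety_margin=0.8):
    """Check whether adding servers keeps total load within a UPS safety margin.

    Returns (fits, spare_watts): whether the projected load fits, and how
    many watts of headroom remain (negative if overcommitted).
    """
    projected_load = current_load_watts + new_server_watts * server_count
    usable_capacity = ups_capacity_watts * safety_margin
    return projected_load <= usable_capacity, usable_capacity - projected_load

# Hypothetical example: 20 kW UPS, 12 kW current load, four 750 W servers
fits, spare = ups_headroom(20000, 12000, 750, 4)
print(fits, spare)  # True 1000.0
```

A derating margin matters because running a UPS near 100% load leaves no room for inrush current or battery-runtime targets; real deployments should use the manufacturer's rated figures.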
Prepare for potential big data, IoT projects
As the internet of things (IoT) and big data grow in importance in the enterprise, more infrastructure is necessary to handle the increased workload. In data center capacity planning, IT teams should be aware of these applications and of the compute, network and storage resources required to support them.
Data processing is at the root of both big data and IoT. Server clusters and scale-out architecture can support these workloads with boosts in memory, network and storage, according to expert Dan Kusnetzky.
Resource management is a key component of data center capacity planning for big data and IoT projects. Understand the limitations of your current infrastructure, and plan ahead for the resources you will need.
For example, adding more storage may seem like a quick, easy fix, but it doesn't always meet the new requirements necessary for big data and IoT projects. Bottlenecks could occur even with the storage upgrade, and in-memory databases could cause issues with power usage.
Use MIPS, MSU to measure mainframe capacity
CPU hours, MIPS and MSU are all metrics that focus on mainframe capacity.
MIPS, or million instructions per second, measures mainframe compute performance and the workloads it can handle. The more MIPS, the higher the capacity. On the other hand, MSU is usually used to calculate software licensing costs, according to expert Robert Crawford.
When converting CPU hours to MIPS to get a better look at mainframe capacity, use the formula: CPU seconds x equivalent uniprocessor MIPS (EMU) / elapsed seconds. The result expresses mainframe workload capacity in MIPS.
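The conversion above can be sketched in a few lines. The function name and the sample figures (a 500-MIPS uniprocessor, 2 CPU hours over an 8-hour window) are illustrative assumptions, not values from the source:

```python
def cpu_hours_to_mips(cpu_hours, equivalent_uniprocessor_mips, elapsed_hours):
    """Convert consumed CPU time into a MIPS-denominated workload figure.

    Implements: MIPS = CPU seconds x EMU / elapsed seconds
    """
    cpu_seconds = cpu_hours * 3600
    elapsed_seconds = elapsed_hours * 3600
    return cpu_seconds * equivalent_uniprocessor_mips / elapsed_seconds

# 2 CPU hours on a 500-MIPS uniprocessor, spread over an 8-hour window
print(cpu_hours_to_mips(2, 500, 8))  # 125.0
```

Because both times are converted to seconds, the ratio is unit-consistent: the workload consumed one quarter of a 500-MIPS engine over that window, or 125 MIPS.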
Consider how cloud, containers affect capacity
The cloud continues to drive change in the data center. Some IT teams choose to build a private cloud, and then handle peak, noncritical workloads in the public cloud.
On-premises growth is to be expected, but the public cloud provides another outlet to grow, if a business wants to save space in its data center, according to expert Jim O'Reilly.
Meanwhile, some IT teams are moving toward shorter infrastructure refresh cycles for servers, networks and storage. In some cases, they delay a server refresh because virtualization and containers expand capacity.
Further down the road, question whether an on-premises data center is even necessary or cost-effective, as hosting environments could also meet business demands.
Combat mainframe unpredictability with CPM
Increasing mainframe capacity is no small or easy task. Adding CPU caps or another processor won't fix the problem, and could instead create more challenges, according to Crawford. When it comes to IBM mainframe tuning, tools like Capacity Provisioning Manager (CPM) in conjunction with Workload Manager (WLM) will help identify issues. In z/OS 1.9 and after, CPM allows IT teams to automatically add or delete capacity based on application performance. CPM interfaces with WLM to monitor workloads, and measures performance metrics.
While CPM has its benefits for mainframe capacity planning, implementation may prove too costly to justify its use.
Capacity management can be feast or famine
Storage is at a premium with big data and IoT
New data center metrics for gauging efficiency