Effective data center capacity management requires the right mix of tools, IT know-how and business savvy.
Successful businesses have the agility and flexibility to quickly bring new workloads online and scale them on-demand. Data center capacity planning ensures that business units have the right computing resources at the right time without breaking the bank. Yet IT budgets seem thinner every cycle, and capacity management is a murky mix of art, science and luck.
Combining software tools, technical talent and business savvy can steer a data center not just to right-size operations, but into ever better performance.
Algorithms and brainpower
Tools are the foundation of data center capacity planning. They must monitor central processing units, memory, storage and network capacity and track each of these resources over time to model future requirements. Tools should also produce clear recommendations that help data centers walk the line between spending capital needlessly and waiting so long to acquire resources that their service suffers. There are myriad options: ManageEngine's Applications Manager, CiRBA's Capacity Control Console and many others.
An auto rental company's data center, for example, currently uses half of its 20 TB storage capacity. Reports show that storage use is growing an average of 1 TB per month. Without action, storage will fall critically low in less than 10 months. The difficult part is determining usage and spotting patterns, which a good tool can identify. A comprehensive proof-of-principle project will show whether the software is adequate for the tasks at hand.
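The runway math in the rental-company example is simple enough to sketch. This is an illustrative calculation, not any particular tool's algorithm; the function name and the 90%-full warning threshold are assumptions for the sketch.

```python
def months_until_threshold(total_tb, used_tb, growth_tb_per_month, threshold_pct=0.9):
    """Months until storage use crosses a 'critically low free space' threshold.

    threshold_pct is an assumed warning level (here, 90% full).
    """
    headroom_tb = total_tb * threshold_pct - used_tb
    if growth_tb_per_month <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    return headroom_tb / growth_tb_per_month

# 20 TB array, 10 TB used, growing ~1 TB/month:
print(months_until_threshold(20, 10, 1))  # 8.0 months until the array is 90% full
```

A real planning tool fits the growth rate from historical samples rather than taking a single number, but the projection step reduces to the same division.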
Even the most refined and comprehensive tool is useless by itself; successful capacity management takes a combination of IT know-how and business insight. The tool provides details, tracks historical usage and predicts future resource needs, but only if IT tunes it for the specific environment and works with the business to distinguish key growth areas from red herrings.
Data center capacity planning guidelines
- Report, report, report: Capacity planning is useless unless resource data is evaluated regularly, so take full advantage of the capacity planning software's reporting capabilities.
- Take warning: Don't wait for regular reports to look for shortages. Use warning features to alert the right staff when resources become low.
- Check scope: Capacity planning tools should cover all systems for an accurate view of resources, especially in a heterogeneous data center environment relying on third-party capacity planning tools.
- Tie in to chargeback: Organizations that adopt IT chargeback mechanisms can use capacity planning tools to highlight usage trends, giving business units better budgeting information.
- Call in the suits: Capacity planning is not just an IT function; it must fit with other business planning and budgeting processes to ensure that money is available to add capacity and adopt cost-saving technologies.
If the auto rental organization acquires a competitor, for example, resource growth will spike substantially long before current storage is exhausted. But not every usage spike indicates a trend. If the business rolls out a new application one month, storage use might spike by 3 TB. This is no cause to revise monthly usage predictions. A capacity planning tool can't make the distinction without IT and business leaders' direction.
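One way to see why a single spike should not revise the monthly prediction: a mean growth rate is dragged upward by the one-off 3 TB rollout, while a median barely moves. The numbers below are illustrative, matching the example's ~1 TB/month baseline.

```python
from statistics import mean, median

# Monthly storage growth in TB; the fourth month holds a one-off 3 TB rollout.
growth = [1.0, 1.1, 0.9, 3.0, 1.0, 1.0]

print(round(mean(growth), 2))   # 1.33 -- skewed upward by the spike
print(median(growth))           # 1.0  -- close to the real trend
```

A tool reporting the mean would shave two months off the projected runway; only IT and business leaders know whether the rollout repeats.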
Capacity management vs. VM sprawl
Capacity planning focuses on resource usage, historical trends and future modeling -- not on the value or benefit derived from those resources. For example, a capacity planning tool shows that a server is 90% utilized, justifying another server buy, but there is no way for the tool to know which of those virtual machines (VMs) are just sitting idle from forgotten projects.
Capacity tools can focus on wasted computing resources by reporting utilization trends. For example, if a server typically runs at 60% capacity month over month, but suddenly climbs to 80%, it may have taken on more workloads from other systems, or new workloads. If you cannot reconcile changes in usage with known IT activity, investigate for unauthorized use or any suspicious activity.
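The 60%-to-80% jump described above can be flagged automatically by comparing the latest reading against a recent baseline. A minimal sketch; the function name and the 10-point tolerance are assumptions, not a specific tool's feature:

```python
def unexplained_jump(history, latest, tolerance=0.10):
    """Flag a utilization reading well above its recent baseline.

    history: recent utilization fractions; tolerance is an assumed
    10-percentage-point allowance for normal variation.
    """
    baseline = sum(history) / len(history)
    return latest - baseline > tolerance

# Server steady around 60%, suddenly at 80%:
print(unexplained_jump([0.58, 0.61, 0.60, 0.62], 0.80))  # True
```

A True result is the cue to reconcile the change with known IT activity before treating it as a capacity trend.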
Understanding which workloads are present, what they do, who owns them, when they were created and when they should be removed (if ever) is all part of workload lifecycle planning -- part of the suite of systems management processes that a virtualized data center should adopt for workload optimization.
What capacity data is trying to tell you
Attentive data center capacity management reveals opportunities to optimize resource usage and add value to the enterprise.
Heavy network bandwidth between servers suggests a need for workload balancing: If a database on one server exchanges a lot of traffic with an application on another server, reduce congestion by putting both VMs on the same server where the application can query the database without going out to the local area network.
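Picking which VMs to co-locate starts with finding the pairs that exchange the most traffic. The matrix below is hypothetical; the VM names and GB/day figures are invented for illustration.

```python
# Hypothetical east-west traffic (GB/day) between VM pairs. Co-locating the
# heaviest-talking pair keeps that traffic off the LAN entirely.
traffic = {
    ("db-vm", "app-vm"): 120,
    ("db-vm", "web-vm"): 15,
    ("app-vm", "web-vm"): 40,
}

heaviest = max(traffic, key=traffic.get)
print(heaviest)  # ('db-vm', 'app-vm')
```

In this toy case the database and application VMs are the obvious candidates to place on the same host, subject to that host having the CPU and memory headroom for both.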
Data deduplication can mitigate rapid storage growth. Forestall expensive high-performance storage investments by introducing tiered storage, assigning lesser-used data to inexpensive serial ATA disks or archival media.
Watch server usage patterns, not just capacity. Computing growth leads to more servers, but instead of adding a lot of boxes, you might be better off adopting fewer high-capacity systems with multiple processor sockets and ample memory that are more energy efficient. Alternatively, outsource some low-priority workloads to a cloud infrastructure or managed service provider instead of investing in hardware.
Capacity trend information will inform upgrade budgets and logistical planning, and even opportunities for technical innovation.