Mainframe tools for better monitoring, capacity planning and more

There are a variety of tools and strategies to optimize mainframe performance, from capacity planning to more efficient virtual storage management and more.

While the mainframe workforce may be shrinking, optimizing a mainframe system remains an important task for many IT pros.

To increase the efficiency of a mainframe, track workload, storage and processor performance. Admins can use different mainframe tools, including real-time monitors and IBM's Capacity Provisioning Manager, to accomplish these tasks.

Here are five SearchDataCenter tips to help streamline mainframe system management.

Adopt mainframe tools to track performance

There are three different types of mainframe tools that track performance: real-time monitors, near-time monitors and post-processors. Each type offers different benefits and diagnostic data, according to Robert Crawford, a systems programmer and TechTarget contributor.

For a live mainframe view, use real-time monitors, which allow users to watch processes as they happen and to instantly diagnose and react to performance issues, such as those related to I/O or memory. However, be aware of overhead concerns; if implemented incorrectly, a real-time monitor could negatively affect system performance.

Near-time monitors strike a balance between real-time and historical analysis, and IT teams can use them to summarize data. However, near-time monitors like IBM's Resource Measurement Facility (RMF) Monitor III lack sortable data columns, and data appears only in intervals of 60 seconds or more.

Lastly, post-processors allow IT to diagnose and analyze large quantities of data retrospectively. IT can track trends, use data summarization to plan for future capacity and debug past problems. The major concerns with post-processors involve time and volume; the large amount of data can be difficult to digest, and results aren't often available until the following day.

SMFLIMxx simplifies virtual storage management

Address spaces, abends and regions can complicate virtual storage management on a mainframe, but SMFLIMxx, a parameter library (PARMLIB) member, helps simplify that process with a rule-based method, according to Crawford.

The word REGION begins each statement in the PARMLIB member, and is followed by filters that describe the address spaces to which a rule applies. Filters can specify a job name, job class, user or subsystem. Attributes, such as MEMLIMIT -- which dictates the maximum amount of 64-bit storage available to an address space -- follow the filters.
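Following that pattern, a SMFLIMxx rule might look like the sketch below. The job name and limit are hypothetical, and the exact syntax should be verified against IBM's z/OS MVS Initialization and Tuning Reference:

```
/* Cap 64-bit storage at 8 GB for batch jobs whose names begin with PAY */
REGION JOBNAME(PAY*) MEMLIMIT(8G)
```

Because the rule is filter-based, one statement can govern a whole family of address spaces without touching an installation exit such as IEFUSI.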

Despite its benefits, admins should change SMFLIMxx settings carefully to ensure enough storage remains reserved for critical tasks.

Perform capacity planning with the right mainframe tools

Adding capacity to a mainframe isn't as easy as turning on another processor or adding CPU caps. IBM Capacity Provisioning Manager (CPM) -- a mainframe tool available on z/OS 1.9 and after, according to Crawford -- assesses workload performance, and then automatically removes or adds capacity based on that information.

CPM monitors workloads through integration with Workload Manager. Admins can alter CPM policies based on workload type.

On the hardware side, CPM can add and delete engines following the release of z/OS 2.2. Still, IT pros should weigh the benefits and drawbacks of CPM, along with the potential financial constraints, before using the tool.

Don't rely solely on thread safety for mainframe performance

Thread safety provides a way to increase mainframe performance, especially for Customer Information Control System (CICS) DB2 applications, by avoiding switches between the quasi-reentrant (QR) task control block (TCB) and open TCBs.

Within CICS, programmers can pass control through Language Environment calls or native program calls, according to Crawford. Native calls are independent of the underlying environment and avoid enclave creation, which helps boost mainframe performance. As a result, admins should weigh the benefits of both thread safety and native calls when tuning mainframe systems.
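To let CICS run a program on an open TCB rather than the QR TCB, the program must be defined as threadsafe in its resource definition. A minimal sketch, with hypothetical program and group names:

```
CEDA DEFINE PROGRAM(ORDPGM) GROUP(ORDGRP) CONCURRENCY(THREADSAFE)
```

Marking a program threadsafe is a declaration, not a guarantee -- the code itself must actually be threadsafe, or concurrent execution can corrupt shared storage.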

Vertical polarization optimizes mainframe processor performance

To optimize mainframe processor performance without making it erratic, IBM has turned to vertical polarization.

Vertical polarization, according to Crawford, entails keeping a production logical partition (LPAR) running on the same processors to reduce the time spent loading and dumping the cache. IBM rolled out mainframe tools such as HiperDispatch to apply vertical polarization techniques, along with various ways to measure the results. HiperDispatch and the mainframe hypervisor, Processor Resource/Systems Manager (PR/SM), keep an LPAR dispatched consistently on the same processors, which preserves the cache and increases efficiency.

For the best results, determine the number of central processors an LPAR can use: divide the individual LPAR's weight by the total weight of all LPARs, then multiply by the number of physical processors. Admins can download IBM's LPAR Design Tool to help plan LPAR configurations.
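The weight calculation above is simple arithmetic. A short sketch, with made-up weights for illustration:

```python
def lpar_cp_share(lpar_weight, total_weight, physical_cps):
    """Processors an LPAR's weight entitles it to:
    (LPAR weight / total weight of all LPARs) * physical processors."""
    return (lpar_weight / total_weight) * physical_cps

# An LPAR weighted 300 out of a total weight of 1000, on a
# 10-processor machine, is entitled to about 3 central processors.
print(lpar_cp_share(300, 1000, 10))
```

HiperDispatch uses this entitlement to decide how many processors to treat as vertical highs for the LPAR, so weights that divide evenly into whole processors tend to polarize more cleanly.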
