Managing CICS performance with Workload Manager

Workload Manager is a useful tool for easily managing CICS performance on the mainframe. This tip details how CICS and WLM interact with one another and how to use WLM to manage CICS response times.

Workload Manager (WLM) has been a blessing for the mainframe. Before WLM there was the System Resources Manager (SRM), which was full of cryptic parameters with vague interactions between them. Back in the '80s, a systems programmer might agonize for weeks before making a simple change, only to become reacquainted with the law of unintended consequences. WLM simplified things quite a bit, enabling the performance analyst to set goals and letting the system manage them.

How WLM manages CICS
WLM can manage CICS as an address space or as a server. In address space mode, WLM keeps an external view of the region and manages it to "velocity" goals. A velocity goal expresses how much of the time the workload should be running, rather than delayed for WLM-managed resources, as a percentage of the time it is ready to run.

WLM establishes the velocity of an address space through sampling. Once it collects enough samples, calculating the velocity is a simple matter of dividing the number of samples in which the address space was actively using resources by the sum of those active samples and the samples in which it was delayed for WLM-managed resources. WLM then compares the calculated percentage against the goal.
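As a rough sketch of that arithmetic (the sample counts below are invented for illustration; the real bookkeeping happens inside WLM):

    # Hypothetical sample counts collected over an interval (illustrative only).
    using_samples = 300   # address space observed actively using a resource
    delay_samples = 200   # address space observed delayed for WLM-managed resources

    # Velocity = using / (using + delay), expressed as a percentage.
    velocity = 100 * using_samples / (using_samples + delay_samples)
    print(f"Achieved velocity: {velocity:.0f}%")   # 60%

    # WLM compares the achieved velocity with the goal in the policy.
    velocity_goal = 50
    print("Meeting goal" if velocity >= velocity_goal else "Missing goal")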

Response-time goals tend to make more sense for CICS. Here, the performance analyst sets percentile (for example, 90% of transactions under two seconds) or average response-time goals, and WLM tries to ensure the region has the resources necessary to meet them.
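For instance, a percentile goal of 90% of transactions under two seconds could be checked against a batch of completed transactions along these lines (the response times are made up; this is only a sketch of the idea, not how WLM actually stores its distributions):

    # Invented response times, in seconds, for completed transactions.
    response_times = [0.4, 0.7, 1.1, 0.3, 2.5, 0.9, 1.8, 0.6, 3.0, 0.5]

    goal_seconds = 2.0
    goal_percent = 90

    within_goal = sum(1 for t in response_times if t <= goal_seconds)
    achieved_percent = 100 * within_goal / len(response_times)

    # Here only 80% finished within two seconds, so the 90% goal is missed.
    print(f"{achieved_percent:.0f}% within {goal_seconds}s (goal: {goal_percent}%)")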

A performance analyst can assign CICS both response-time and velocity goals. With that type of policy, WLM manages CICS to the velocity goal until the region starts reporting on individual task performance. At that juncture, WLM switches to response-time goals and ignores velocity.
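A minimal sketch of that switch-over, assuming nothing more than a flag saying whether the region has begun reporting transaction-level data:

    def effective_goal(region_reports_transactions: bool) -> str:
        # Until CICS reports on individual tasks, WLM falls back to the velocity goal.
        if region_reports_transactions:
            return "response-time goal (velocity ignored)"
        return "velocity goal"

    print(effective_goal(False))  # region just starting up: managed to velocity
    print(effective_goal(True))   # transactions flowing: managed to response time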

CICS and WLM interaction
WLM needs to know what's going on inside of CICS if it is to manage response time. CICS connects to WLM at startup and allocates one performance block (PB) for each max task (MXT) slot, plus a few extra for system tasks. The number of PBs can be significant. To gather performance information, WLM scans the PBs every quarter second. Setting MXT to the maximum of 999 may be convenient from an administrative point of view, but it causes a lot of overhead as WLM tiptoes through the PBs for dozens of regions four times a second.
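Some back-of-the-envelope arithmetic shows why that matters. Assuming, say, 30 regions on an LPAR and a scan every quarter second:

    regions = 30           # hypothetical number of CICS regions on the LPAR
    mxt = 999              # MXT set to the maximum "for convenience"
    scans_per_second = 4   # WLM looks at the performance blocks every quarter second

    # One PB per MXT slot (ignoring the extra PBs for system tasks).
    pb_scans_per_second = regions * mxt * scans_per_second
    print(f"{pb_scans_per_second:,} PB examinations per second")               # 119,880

    # The same regions with a more realistic MXT of 120.
    print(f"{regions * 120 * scans_per_second:,} PB examinations per second")  # 14,400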

Each transaction must be assigned a service class for WLM to manage its performance. Thus, when CICS attaches a new transaction, it issues a WLM classify macro. WLM selects a service class for the unit of work based on several criteria, including the transaction ID, the CICS APPLID, the originating LU name or even the user ID (in case the CEO logs on). The service class defines the performance goal for the transaction. If WLM doesn't find a matching service class, it assigns a default. Once the work is classified, CICS can let WLM know the task is on its way through a start request.
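Conceptually, classification works like an ordered rule lookup with a default at the bottom. The rules and names below are invented for illustration; the real rules live in the WLM service definition:

    # Invented classification rules: (attribute, value to match, service class).
    rules = [
        ("userid", "CEO",      "CICSFAST"),   # the CEO gets special treatment
        ("tranid", "PAYR",     "CICSPAY"),
        ("applid", "CICSPRD1", "CICSPROD"),
    ]
    DEFAULT_SERVICE_CLASS = "CICSDFLT"

    def classify(work_attributes):
        # First matching rule wins; otherwise the work gets the default service class.
        for attribute, value, service_class in rules:
            if work_attributes.get(attribute) == value:
                return service_class
        return DEFAULT_SERVICE_CLASS

    print(classify({"tranid": "PAYR", "applid": "CICSPRD1"}))   # CICSPAY
    print(classify({"tranid": "INQY", "applid": "CICSTST1"}))   # CICSDFLT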

A transaction that participates in dynamic routing or function shipping doesn't get reclassified every time a task starts in another region. Instead, CICS passes the service classification over multi-region operation (MRO) links and issues a macro informing WLM that the task is a continuation of a previously started unit of work (UOW).
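The effect is roughly this (a sketch only; the names are not the actual CICS or WLM interfaces):

    def attach_task(incoming_service_class=None):
        if incoming_service_class is not None:
            # The work arrived over an MRO link: keep the original classification
            # and tell WLM this task continues an existing unit of work.
            return ("continue existing UOW", incoming_service_class)
        # Locally initiated work: classify it from scratch and report a new unit of work.
        return ("start new UOW", "CICSDFLT")   # placeholder classification

    print(attach_task("CICSPAY"))   # routed or function-shipped work keeps its service class
    print(attach_task())            # fresh work is classified and starts a new UOW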

As an aside, this global view of a UOW is a nice piece of technology. Managing response-time goals wouldn't make much sense if WLM couldn't track a UOW's progress through a CICSPlex. After all, how would boosting an application-owning region's priority improve an application's response time when a file-owning region is delaying it?

As each task in the UOW completes, CICS issues a notify macro to tell WLM how things went.

Another feature of WLM, dynamic workload management, re-allocates resources to meet performance goals. Every 10 seconds WLM calculates a "performance index" for each service class from the collected performance data to see if anything needs jiggling. If the index is less than one, the service class is meeting or exceeding its performance goal. If it is greater than one, some adjustment may be necessary.
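The index itself is simple arithmetic: for an average response-time goal it is achieved divided by goal, and for a velocity goal it is goal divided by achieved. A small sketch:

    def performance_index(goal_type, goal, achieved):
        # PI of 1 means the goal is exactly met; below 1 exceeds it, above 1 misses it.
        if goal_type == "response_time":     # average response-time goal, in seconds
            return achieved / goal
        if goal_type == "velocity":
            return goal / achieved
        raise ValueError("unknown goal type")

    print(performance_index("response_time", goal=2.0, achieved=1.5))   # 0.75: exceeding
    print(performance_index("velocity", goal=50, achieved=40))          # 1.25: missing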

Ultimately, WLM picks one service class out of the group that isn't meeting its goal and adjusts an address space's access to the resource it thinks is causing the delay. It may also change some other workloads if it has to rob Peter to pay Paul. After making the changes, WLM goes back into collection mode until the next 10-second interval.
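Putting those pieces together, one 10-second cycle looks roughly like the sketch below. It is deliberately simplified; real WLM weighs importance levels, projects the benefit of each change and manages far more resource types:

    def adjustment_cycle(service_classes):
        # Pick the worst performer that is missing its goal (PI > 1) as the receiver.
        missing = [sc for sc in service_classes if sc["pi"] > 1.0]
        if not missing:
            return   # everyone is meeting goals; collect data until the next interval
        receiver = max(missing, key=lambda sc: sc["pi"])

        # Give the receiver more of the resource that appears to be delaying it,
        # possibly at the expense of a donor that is comfortably beating its goal.
        donors = [sc for sc in service_classes if sc["pi"] < 1.0]
        donor = min(donors, key=lambda sc: sc["pi"]) if donors else None
        helped = f"Helping {receiver['name']}"
        print(helped + (f" at the expense of {donor['name']}" if donor else ""))

    # One pass per 10-second policy interval.
    adjustment_cycle([
        {"name": "CICSPROD", "pi": 1.4},
        {"name": "CICSTEST", "pi": 0.9},
        {"name": "BATCHLOW", "pi": 0.6},
    ])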

IBM designed the 10-second delay and one update per cycle into WLM to avoid thrashing. After all, most customers want their computers to do more than adjust resource allocation. It also serves to smooth out system performance instead of jerking things along.

This smoothing strategy has some downsides, too. Sometimes, if a workload like CICS gets into a hole, that 10-second wait for an adjustment can cause problems (recognize the voice of experience here). The overall impact, of course, depends on the volatility of the workload and its sensitivity to periodic changes in performance.

ABOUT THE AUTHOR: For 24 years, Robert Crawford has worked off and on as a CICS systems programmer. He is experienced in debugging and tuning applications and has written in COBOL, Assembler and C++ using VSAM, DLI and DB2.

What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at mstansberry@techtarget.com.

This was first published in August 2010
