
Configuring DBCTL and virtual storage

Get more from your CICS and IMS database interface. CICS expert Robert Crawford discusses how.

Despite the primacy of DB2, many CICS shops still use the old warhorse IMS (Information Management System). The DB Control (DBCTL) interface between CICS and IMS/DB is generally a "set and forget" affair, but does bear occasional revisiting, especially as enterprises cram more workload onto fewer LPARs. This will be a short treatise on how to configure DBCTL and the virtual storage it uses.

Creating the DFSPZP load module


The Database Resource Adapter (DRA) start-up table configures the DBCTL interface with macro DFSPRP. When assembled, DFSPRP creates a load module named DFSPZPxx, where "xx" is a numeric suffix. This is the same suffix specified in transaction CDBC or the INITPARM system initialization table (SIT) parameter for DFHDBCON. The suffix lets you specify different DRA parameters for different CICS regions if, for instance, you run test and production in the same LPAR. Technically, DFSPZP describes how a coordinator controller (CCTL) interfaces with IMS/DB. For our purposes, CICS is the CCTL.
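As a sketch, a DFSPZP source member for suffix 01 might look like the following. The DBCTLID value is an assumption for illustration, and assembler column/continuation rules are omitted for readability:

```
* Illustrative DRA start-up table; assembled and link-edited as DFSPZP01.
DFSPZP01 DFSPRP DBCTLID=IMSA       DBCTL subsystem to connect to (assumed)
         END
```

CICS would then pick up these parameters by connecting with suffix 01, either through the CDBC transaction or the INITPARM SIT parameter for DFHDBCON.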

Fast path buffers and virtual storage
The number and size of fast path buffers probably has the biggest impact on storage. Not only do these buffers come out of extended common storage area (ECSA), they may be page fixed. Even today ECSA is a finite resource jealously hoarded by the performance guys, especially as it puts pressure on CICS' and DB2's private areas. It also limits the number of IMS/DB or IMS/TM instances that can run on one LPAR.

The following is a list of the DFSPZP parameters that control fast path buffers:

  • CNBA – The buffer allocation for each CCTL (CICS region)
  • FPBUF – Number of fast path buffers for a CCTL thread
  • FPBOF – Number of fast path overflow buffers per thread
So we can assume that CNBA represents the number of buffers allocated to CICS when it connects to DBCTL. Then, as threads are created, IMS allocates FPBUF buffers out of the CNBA pool. If a thread uses up its buffers, IMS allocates overflow buffers from a separate pool until the thread reaches a syncpoint. Note that the overflow buffer pool is serially shared by all the CCTL (CICS and non-CICS) regions in the LPAR. Therefore, dipping into the overflow buffer pool too often will hurt performance as each thread waits its turn.

There are two other things to consider before setting the above parameters. The first, which systems programmers don't control, is buffer size. The database administrators control the buffer size based on the attributes of the underlying datasets so the systems guys have to take their complaints to them. Sometimes it's worth a shot as the database dataset definition may have changed and no one thought to change the buffer size.

The second consideration is the maximum (MAXTHRD) and minimum (MINTHRD) parameters which control the number of DRA threads CICS has to IMS. DBCTL creates the minimum number of threads when CICS connects. These threads are never terminated throughout the connection's lifetime. MAXTHRD is the maximum number of DBCTL threads for a CICS region. If a region reaches MAXTHRD subsequent transactions must wait until an active task relinquishes a thread.
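Putting the buffer and thread parameters together, a hedged sketch of the relevant DFSPRP operands might read as follows. All values are illustrative assumptions, not recommendations, and assembler column/continuation rules are again omitted:

```
* Illustrative operands only.
DFSPZP01 DFSPRP MINTHRD=3,         keep 3 threads for the connection's life
               MAXTHRD=30,         cap of 30 concurrent DBCTL threads
               FPBUF=4,            4 fast path buffers per thread
               FPBOF=2,            2 overflow buffers per thread
               CNBA=120            4 buffers x 30 threads = 120 for this CICS
         END
```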

Based on this information we can see that CNBA should equal the number of buffers per thread (FPBUF) multiplied by MAXTHRD. The overall CICS buffer requirement will be CNBA times the number of regions in the LPAR times the buffer size. However, you will use more depending on database types and IMS system requirements.
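The arithmetic above is simple enough to sanity-check in a few lines. This sketch multiplies out the ECSA demand described in the text; all the numbers are assumptions (a 4K buffer, 4 buffers per thread, 30 threads, 5 CICS regions), and it ignores the extra usage from database types and IMS system requirements:

```python
def cnba(fpbuf: int, maxthrd: int) -> int:
    """CNBA for one CICS region: buffers per thread times max threads."""
    return fpbuf * maxthrd

def ecsa_bytes(fpbuf: int, maxthrd: int, regions: int, bufsize: int) -> int:
    """Rough ECSA demand for fast path buffers across all CICS regions
    in the LPAR. Real usage will be higher, as the article notes."""
    return cnba(fpbuf, maxthrd) * regions * bufsize

# Assumed example values, not recommendations.
print(cnba(4, 30))                 # CNBA per region
print(ecsa_bytes(4, 30, 5, 4096))  # total bytes of ECSA
```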

Failure to set these numbers correctly can have a number of unpleasant effects. If buffers are over allocated there's a chance of blowing out ECSA. If FPBUF is set too low performance will be hurt by constant dipping into the overflow pool. If CNBA is too low CICS may not connect to DBCTL. Furthermore, applications may receive bad status codes. No pressure.

Another unpleasant surprise
Each DBCTL thread runs under a TCB in the CICS address space. Each TCB needs system storage (OSCORE, for us old fogies) above and below the line. If you define your DBCTL interface with a high MAXTHRD you may see GETMAIN failure (S80A) ABENDs for storage below the line during peak workloads. The answer is to lower the CICS DSA parameter.

This is counterintuitive. I always thought that CICS allocated DSA as needed in 256K segments, so there should be plenty of system storage left over as long as DSA requirements are low. This is not true. Instead, CICS GETMAINs all of the storage specified by the DSA parameter up front in a system subpool. Then, as CICS needs DSA segments, it frees the storage from that subpool and GETMAINs it again in key 8 or 7. Thus, DBCTL threads can exhaust OS storage below the line even if applications are light on the DSA.

Both CICS and IMS create statistics that may help you tune the DBCTL interface. IMS, for instance, writes fast path usage information to type X'5937' log records. CICS writes DBCTL usage statistics, too, indicating the number of threads used and successful PSB schedules. Unfortunately, CICS only cuts these records when it disconnects from DBCTL. This means that you may go weeks without any statistics depending on how you operate.

Again, the CICS DBCTL interface usually works well enough to be the least of your worries. However, tracking trends with IMS and CICS statistics, along with an understanding of the interface's requirements, might prevent future problems.

ABOUT THE AUTHOR: For 24 years, Robert Crawford has worked off and on as a CICS systems programmer. He is experienced in debugging and tuning applications and has written in COBOL, Assembler and C++ using VSAM, DLI and DB2.
