I recently coded two GETMAIN calls in a heavily used CICS program. The two calls acquire 2542 and 968 bytes of storage respectively. Neither GETMAIN uses the SHARED keyword, so the storage should be released when the task ends.
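For context, a minimal sketch of what such calls might look like (the pointer names are my assumptions, not taken from the actual program):

```cobol
      * Illustrative sketch only: field names are assumptions.
       WORKING-STORAGE SECTION.
       77  WS-PTR-A                USAGE POINTER.
       77  WS-PTR-B                USAGE POINTER.
       PROCEDURE DIVISION.
      * Task-lifetime storage: with no SHARED option, CICS
      * should free both areas automatically at task end.
           EXEC CICS GETMAIN
                SET(WS-PTR-A) FLENGTH(2542)
           END-EXEC.
           EXEC CICS GETMAIN
                SET(WS-PTR-B) FLENGTH(968)
           END-EXEC.
```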
After the change was moved to production, the CICS admin team called to say the transaction was acquiring memory and not releasing it. The system admin looked at the REAL parameter in the spool for the current CICS job and said the normal value there is between 10-15T (equivalent to 10-15MB), but while the program was running it went up to 48T, which is not good. In my opinion the storage should not be held permanently, since the program does not use the SHARED option, but I am not sure why so much system storage is being used.
One other thing that may help identify the problem: this program is the first in a long chain of programs. We read pointers from a TSQ and load them into the area acquired through GETMAIN. The variables are defined in the LINKAGE SECTION, hence the use of GETMAIN. Previously the variables were in WORKING-STORAGE and we did not read the TSQ to populate the pointers.
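A hedged sketch of that pattern, mapping the LINKAGE SECTION record onto GETMAIN'd storage and filling it from the TSQ (the queue name, item number, and record layout are assumptions for illustration):

```cobol
      * Hedged sketch: queue name, item number, and layout
      * are assumptions, not the original program's.
       WORKING-STORAGE SECTION.
       77  WS-TSQ-LEN              PIC S9(4) COMP.
       LINKAGE SECTION.
       01  LK-POINTER-AREA.
           05  LK-PTR              OCCURS 100 TIMES USAGE POINTER.
       PROCEDURE DIVISION.
      * Acquire task-lifetime storage and address the
      * LINKAGE SECTION record with it.
           EXEC CICS GETMAIN
                SET(ADDRESS OF LK-POINTER-AREA)
                FLENGTH(LENGTH OF LK-POINTER-AREA)
           END-EXEC.
      * Read the saved pointers from the TSQ into that area.
           MOVE LENGTH OF LK-POINTER-AREA TO WS-TSQ-LEN.
           EXEC CICS READQ TS
                QUEUE('PTRSTSQ ')
                INTO(LK-POINTER-AREA)
                LENGTH(WS-TSQ-LEN)
                ITEM(1)
           END-EXEC.
```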
My question is: could GETMAIN, without SHARED, hold on to CICS storage and not release it? Could an excessive number of hits on the transaction have caused this to happen?
I've been having a bit of a think about this and confess that I've not had a lot of inspiration relating to your observation.
Only two things come to mind. You will probably have to turn on storage trace in the production region and see where the storage is going. A snap dump might also be needed to look at the storage subpools used by your transaction.
Anyway, the first idea (somewhat simplistic) is that you should check that you are not in some sort of processing loop that prevents the transaction from ever ending. Only when the transaction instance ends will the 4K-ish chunk of storage be released.
I don't think that the use of a TSQ to build these areas (for whatever reason you are doing it this way), or the depth of the program stack, has anything to do with your observation. And you would have to be running a huge number of concurrent transaction instances to bump up region usage by 30MB!
My second thought is that you are seeing a side effect of a different CICS region environment in production as opposed to development. If the production region runs on hardware that supports transaction isolation, you have turned this on in the SIT and in the transaction definition, and your storage is being obtained in user key, then you are actually getting rather more than 4K of storage. In that situation the transaction instance actually grabs a 1MB area (above the line), which is somewhat surprising if you only EXEC CICS GETMAIN a single byte.
You can see this effect by taking a dump and looking at the storage subpools. This idea might be closer to the observed circumstances but it does not convince me as the effect would already be apparent in the production region.
CICS Technical Strategist -- CICS expert at Search390.com