We replicated one of our DB2 tables into a CICS-maintained data table with 700,000 records. Calls to DB2 were replaced with calls to the data table. CPU usage went up. Our system is threadsafe. Could the TCB switches caused by the data table calls be more expensive than the DB2 calls? The data table is in an FOR accessed by multiple AORs.
If you're running a release earlier than CICS/TS 2.2, the time spent in DB2 subtasks isn't collected in the CICS Monitoring Facility (CMF) field USRCPUT, so the application isn't charged the true cost of running. When you moved to data tables, CICS could account for all of the CPU time, which is reflected in your results.
If you're running CICS/TS 2.2 or later, the CPU time consumed on the open (L8) TCBs is included in USRCPUT. However, there is a great deal of overhead involved in function shipping to a file-owning region (FOR), depending on the type of file processing your application performs. Not only does the AOR spend additional CPU packaging up each request, but the FOR has to spend time attaching mirror tasks and returning the requested records.
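For context, what triggers the function shipping is the file definition in the AOR pointing at the FOR. A minimal sketch (the file, group, and connection names are placeholders, not taken from your setup):

```
* AOR FILE definition that causes requests to be function shipped
CEDA DEFINE FILE(MYTABLE) GROUP(MYGROUP)
     REMOTESYSTEM(FOR1) REMOTENAME(MYTABLE)
```

Every EXEC CICS READ against MYTABLE in the AOR then travels over the MRO/ISC connection to FOR1, which is where the packaging and mirror-task cost comes from.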
I have the following suggestions:
I'm not aware of any task-switching requirements for data tables. In CICS/TS 2.3 or earlier, CICS tasks won't switch to an L8 TCB until the first DB2 call. In CICS/TS 3.1 you have to tell CICS to start the transaction on an open TCB by defining the program with API(OPENAPI). You ought to look at an auxiliary trace to see whether your transactions really are moving between TCBs.
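As a sketch of both steps (CICS/TS 3.1 attribute names; the program and group names are placeholders):

```
* Define the program to run on an open TCB from the start
CEDA DEFINE PROGRAM(MYPROG) GROUP(MYGROUP)
     CONCURRENCY(THREADSAFE) API(OPENAPI)

* Then capture an auxiliary trace to watch for TCB switching,
* e.g. via the CETR transaction or:
CEMT SET AUXTRACE START
```

In the formatted trace, look for dispatcher entries showing the task changing TCB modes (QR to L8 and back); each round trip is a switch you're paying for.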
If your application only reads the data table, there's no reason you have to use an FOR at all. If possible, run a performance test of your application with the data table local to the AOR and measure that against the current performance.
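A CICS-maintained data table is declared on the FILE definition itself, so making it local is largely a matter of installing a full (non-remote) definition in the AOR. A sketch, with the names and data set placeholders and MAXNUMRECS sized to your 700,000 records:

```
* Local FILE definition for a CICS-maintained data table in the AOR
CEDA DEFINE FILE(MYTABLE) GROUP(MYGROUP)
     DSNAME(PROD.VSAM.MYTABLE)
     TABLE(CICS) MAXNUMRECS(700000)
```

With TABLE(CICS), reads are satisfied from the in-memory table while CICS keeps the source VSAM data set in step with any updates.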
If the file requests must be function shipped, try tuning the application to avoid browses and updates. You may also want to take advantage of the MROLRM system initialization parameter, which keeps the mirror task alive in the FOR until the calling transaction takes a syncpoint, rather than attaching a new mirror for each request.
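Enabling the long-running mirror is a one-line system initialization override in the FOR (shown here as a SIT override; the rest of your SIT is unchanged):

```
MROLRM=YES
```

This trades a small amount of mirror-task residency in the FOR for avoiding a task attach on every function-shipped request, which is usually a good exchange for request-heavy workloads like yours.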