We are currently trying to convince one of our largest customers to go to CICS/TS 1.3 from CICS 4.1. We use our...
own benchmarking program. Their systems are all VSAM -- and large, I might add, with some files spanning multiple 3390 volumes, plus some data tables. The VSAM files, especially the heavy hitters, are separated into their own LSR pools. In most cases we let CICS calculate the LSR pool sizes, and buffer hit ratios are around 90%. To benchmark, a simple COBOL program spikes a VSAM file with 10,000 adds, reads and deletes. We use a stripped-down SIT with none of the traditional overhead turned on. We use SDSF to submit the started task, and a sequential terminal to kick off the transaction that invokes the benchmark program, so human intervention is held to a minimum. We use identical SIT parms between 4.1 and 1.3.
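The benchmark driver described above might look roughly like the following minimal sketch. The file name BNCHFILE, the key and record layouts, and the 80-byte record length are all hypothetical; the EXEC CICS file-control commands are the standard API calls such a program would issue.

```cobol
      * Minimal sketch of the benchmark driver (names hypothetical).
      * Spikes a VSAM KSDS with 10,000 adds, reads and deletes.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. VSAMBNCH.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-RESP        PIC S9(8) COMP.
       01  WS-IX          PIC 9(5)  VALUE ZERO.
       01  WS-KEY         PIC 9(5).
       01  WS-REC         PIC X(80).
       PROCEDURE DIVISION.
           PERFORM VARYING WS-IX FROM 1 BY 1 UNTIL WS-IX > 10000
               MOVE WS-IX TO WS-KEY
               MOVE SPACES TO WS-REC
      *        Add, read back, then delete the same record
               EXEC CICS WRITE FILE('BNCHFILE') FROM(WS-REC)
                    RIDFLD(WS-KEY) RESP(WS-RESP) END-EXEC
               EXEC CICS READ FILE('BNCHFILE') INTO(WS-REC)
                    RIDFLD(WS-KEY) RESP(WS-RESP) END-EXEC
               EXEC CICS DELETE FILE('BNCHFILE')
                    RIDFLD(WS-KEY) RESP(WS-RESP) END-EXEC
           END-PERFORM
           EXEC CICS RETURN END-EXEC.
```

Because each iteration adds, reads and deletes the same key, the file ends the run at its starting size, which keeps repeated runs comparable across releases.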
When we compare the QR TCB in both releases, we are seeing anywhere from a 3% to 10% increase in CPU consumption. We did have one of their production regions running 1.3, but had to back it out due to increased CPU with no changes made except the upgrade from 4.1 to 1.3. We do not use CF logs. We have used SMF type 88 records to tune the logs and have offloading down to a minimum. We do use the default LGDFINT of 30. Our resource folks use MICS to measure SMF type 30 (system) and type 110 (CICS performance) records. Usually these reports show that 80-85% of the CPU usage is attributed to transaction activity, which is acceptable. Under CICS/TS 1.3, it's only about 60-70%. Why has this occurred?
From the very full description (thanks), I cannot see that you have missed out anything vital on the tuning front.
While a CPU increase of around 5% might be expected in the migration, using up to 10% more CPU seems to be on the high side and somewhat unexpected. I'm assuming that you have checked your accounting routines so that double counting does not occur.
For CTS 2.2 we changed the log defer interval (LGDFINT) default down to 5ms: the shorter the interval, the more CPU is used. As you are using 30ms, less CPU is being consumed for logging, because CICS buffers more logging activity before each forced log write.
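For reference, the interval is a SIT override; an illustrative SYSIN fragment (not taken from the original post) making the 30ms setting explicit would look like this:

```
* Illustrative SIT override (SYSIN) fragment.
* A larger LGDFINT lets CICS buffer more log writes per I/O,
* trading a little response time for less CPU; the CTS 2.2
* default was lowered to 5.
LGDFINT=30
```

So in this case the 30ms default is working in your favour on CPU, and is unlikely to be the source of the increase.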
I think -- from your email address -- that you are a vendor/ISV, and so have access to the special support channels that come with that; I'd recommend you raise this issue via that route.
Before you do that, here are a few things to think about which may point to something specific:
Look at the frequency of actual I/O to the journal files in 4.1 and to the logger in 1.3 -- is there any sort of mismatch which might help explain things?
Consider the actual transactional throughput -- has this significantly diminished with CTS 1.3?
Do the stats show that all Transactions have started to slow down/use more CPU or is it only a subset?
Is there sufficient storage available for the region?
Have the Language Environment settings used by the region somehow been changed so that extra functions are unexpectedly active?
Were the application programs previously running outside Language Environment, but are they now running inside an LE enclave?
Nevertheless, CICS 4.1 goes out of service at the end of the year, so migration SHOULD have occurred by then. It might well be the case that some sort of region split has to be implemented -- even if only for the short term -- to avoid running unsupported.