These questions and answers originally appeared on TechTarget's Expert Answer Center as part of Robert Crawford's two-week tenure as the on-demand expert from January to February 2006. During this time, he was available to quickly answer questions on CICS application performance and design, as well as to write daily blog entries. Keep an eye on the Expert Answer Center for topics that could help your IT shop.
We have applications that run on CICS TS 3.1 on z/OS and access VSAM and DL/I files on a remote VSE system using CICS function shipping. We now want to access these VSAM and DL/I files through batch programs. How can we use the CICS definitions from a batch program?
There are a couple of products on the market that allow batch programs to share files with CICS. These products usually hook into the operating system's open/close logic and redirect the batch job's I/O to a mirror transaction running in CICS. However, I'd make sure the product can handle remote files. This type of software also probably won't help you with DL/I, since IBM removed support for remote IMS databases some time in the early '90s.
A more comprehensive answer may be CICS' External CICS Interface (EXCI). Through a series of application programming interface (API) calls, a batch program builds a "pipe" it can use to invoke a program in CICS. The online program should be able to access the remote resources just as it does now. You can find more information about EXCI in the CICS library.
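As a rough illustration of the command-level flavor of EXCI, a batch COBOL program translated with the XOPTS(EXCI) option can issue a distributed EXEC CICS LINK to an online program. The program and region names below (ONLPROG, CICSPROD) are placeholders, and this sketch omits the link-edit and EXCI library setup your shop would need:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. BATCHEXC.
      * Batch program that drives an online program through EXCI.
      * Translate with XOPTS(EXCI) and link-edit with the EXCI stubs.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-COMMAREA.
           05  WS-REQUEST       PIC X(8)  VALUE 'READFILE'.
           05  WS-REPLY         PIC X(72) VALUE SPACES.
       01  WS-RESP              PIC S9(8) COMP.
       PROCEDURE DIVISION.
      * Invoke the online program; the mirror runs in region CICSPROD,
      * where the program can reach the function-shipped resources.
           EXEC CICS LINK PROGRAM('ONLPROG')
                APPLID('CICSPROD')
                COMMAREA(WS-COMMAREA)
                LENGTH(LENGTH OF WS-COMMAREA)
                RESP(WS-RESP)
           END-EXEC
           IF WS-RESP NOT = DFHRESP(NORMAL)
              DISPLAY 'EXCI LINK FAILED, RESP=' WS-RESP
           END-IF
           GOBACK.
```

The advantage of this approach for the question above is that the online program runs under CICS, so the existing function-shipping definitions for the remote VSAM and DL/I resources apply unchanged.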
Is it possible to share a VSAM file between CICS and batch processing for update? If so, how difficult is it to set up?
IBM's answer would be Transactional VSAM (TVS). TVS, which is based on VSAM record-level sharing (RLS), lets VSAM files be shared between batch and online much as DB2 tables are. However, you must go through the exercise of changing your batch processing to include checkpoints and restart recovery. You also have to set up the log streams for the cluster and write or purchase software for data set recovery. TVS is fairly new and comes with its own set of operating system prerequisites, so it's going to have growing pains. Given all this, coupled with the added expense of purchasing TVS, I'd advise changing the application to use DB2 if you have the source code and the manpower to dedicate to it.
There are also some products on the market that allow you to update online files from batch. The ones I've seen work by redirecting the I/O request from the batch program to CICS, where a service or mirror transaction actually performs the I/O. Some of these products work without any change to the batch program. I've seen this work successfully in my shop for many years. However, you still have to be careful about a batch program overwhelming CICS or holding file locks that delay online transactions.
We have a CICS application that makes many dynamic calls to another COBOL program. How can we reduce the CPU this consumes? We have read the LE manuals and done everything they suggest, but to no avail. STROBE points to the overhead of the call.
To be honest, I would have recommended dynamic calls as more efficient than CICS LINK commands. I'm not sure I know the answer, but I can give you some things to look for:
- Have you looked at a CICS trace to see what's happening during the calls? In general you should see an EXEC CICS LOAD followed by some GETMAINs for LE storage.
- Does the called program have a large working storage section with a lot of initialized variables?
- What is your LE storage setting? Do you tell LE to clear all storage to zeros before invoking the program? You may be able to avoid this overhead if the program can work with "dirty" storage.
- What are your LE stack and heap settings? If they're set incorrectly you may incur extra overhead as LE has to get more storage.
- Are your LE runtime CSD module definitions up to date?
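To make the storage questions above concrete, here is one illustrative combination of LE runtime options. The sizes are assumptions for discussion, not recommendations; tune them against what your own CLER or CESE output shows:

```
STORAGE(NONE,NONE,NONE,0K)       Don't initialize heap or stack storage to a fill byte
STACK(4K,4K,ANYWHERE,KEEP)       Initial/increment stack sizes; KEEP avoids free/reacquire
HEAP(4K,4K,ANYWHERE,KEEP,4K,4K)  Initial/increment heap sizes above and below the line
```

STORAGE(NONE,...) is what avoids the cost of clearing storage before each invocation; only use it if the called program can truly work with "dirty" storage. Undersized STACK or HEAP initial values show up as LE repeatedly acquiring increments, which is the extra overhead mentioned above.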
Note that you can look at your LE settings online by entering transaction CLER and hitting PF5. Hitting PF10 from the settings display will write them out to extra-partition transient data queue CESE.