We have an application memory manager written for our application in C++ on top of LE. As part of performance tuning in CICS, we tried changing the initial heap size of the application memory manager to match the initial heap size of LE. We got some interesting results. Given TxnA (the baseline) and TxnB (with the changed initial heap size): TxnB took 3ms more CPU time than TxnA; TxnB took 100ms less elapsed time than TxnA; TxnA used 18M of user storage whereas TxnB used 23M; TxnA's storage occupancy was 84,810 and TxnB's was 74,754.
We observed all the above information from MainView. In your opinion, which transaction has performed better from a holistic point of view (considering scalability too)?
I will initially admit to a strong bias against C++ programming - it just uses too much memory for my liking and there is the possibility of memory leaks due to unexpected coding side-effects: strangely enough, that's why I prefer to use Java for OOish things.
I think the crucial thing here is the vast quantity of memory being used. I'm assuming that you have done all the storage tuning described in the LE tuning guides, but it's probable that the sheer amount of storage in use, rather than these options (provided they are sensibly set), is the determining effect.
These results are, as you observe, quite interesting: an attempt on your part to improve things actually had the opposite effect on CPU. Two explanations suggest themselves. First, your memory manager may not require all of its allocation initially, so you get some reuse of existing areas before obtaining more storage from CICS via the LE storage pools. Alternatively (the reasoning depends on how much memory is initially acquired and how it is expanded), the initial LE heap size/initial application memory manager size is too small for your processing, in which case you are doing additional memory allocations (which bump up the CPU required) and these are in turn possibly fragmented (so more memory may be required anyway). Which of these explanations fits depends on your code and LE settings!
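The second explanation can be illustrated with a toy sub-allocator. This is not your memory manager, and the names and sizes are purely illustrative; plain `std::malloc` stands in for the LE heap. The point is simply that when the initial area is too small, the manager must keep going back to the underlying allocator, and each of those trips costs CPU:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Toy sub-allocator: carves requests out of chunks obtained from a
// backing allocator (standing in here for the LE heap). Each time the
// current chunk is exhausted it asks the backing allocator for another
// one -- the "additional memory allocations" that burn extra CPU when
// the initial size is too small.
class ChunkedArena {
public:
    explicit ChunkedArena(std::size_t chunkSize)
        : chunkSize_(chunkSize) { grabChunk(); }

    ~ChunkedArena() {
        for (void* c : chunks_) std::free(c);
    }

    // Simple bump allocation; assumes n <= chunkSize_.
    void* allocate(std::size_t n) {
        if (used_ + n > chunkSize_) grabChunk();   // expansion path
        void* p = static_cast<char*>(chunks_.back()) + used_;
        used_ += n;
        return p;
    }

    // How often we had to go back to the backing allocator.
    std::size_t backingCalls() const { return chunks_.size(); }

private:
    void grabChunk() {
        chunks_.push_back(std::malloc(chunkSize_));
        used_ = 0;
    }
    std::size_t chunkSize_;
    std::size_t used_ = 0;
    std::vector<void*> chunks_;
};
```

Serving the same 100 requests of 64 bytes each, an arena with a 1K initial chunk makes seven trips to the backing allocator while an 8K arena makes one: same work for the application, very different allocation traffic underneath.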
Try bumping up the initial LE memory allocations to match the initial requirements of the applications using your memory manager (so the latter has a big enough area to service the initial and the next few requests).
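One concrete place to do that is the LE HEAP runtime option (in CEEDOPT/CEEUOPT, or region-wide via CEEROPT). The figures below are placeholders only; size the initial allocation from what you actually measure (your TxnB figure of roughly 23M suggests something in that neighbourhood), not from my numbers:

```
HEAP(24M,1M,ANYWHERE,KEEP,8K,4K)
```

The first value is the initial heap, the second the increment used when the heap must be extended; a generous initial value with a modest increment is what keeps the expansion path off your main line.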
I think I would be inclined to try another programming model (though this depends on how your application memory manager is coded) whereby you do an XC GETMAIN FLENGTH(bigsize) to get the area which you are managing, as opposed to using the LE storage mechanism (via the C memory functions). You would avoid the C++/LE memory mechanisms altogether and so improve performance (maybe).
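A minimal sketch of that model, with loud caveats: in real CICS the big area would come from `EXEC CICS GETMAIN SET(ptr) FLENGTH(poolSize)`, but here `std::malloc` stands in for it so the sketch compiles anywhere, and the class is an illustration rather than your actual manager. The point is that storage is obtained once, up front, and every subsequent request is carved out without touching the C++/LE storage mechanisms:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// "One big GETMAIN" model: a single large area obtained once up front
// (EXEC CICS GETMAIN FLENGTH(...) in real CICS; std::malloc stands in
// here). Every application request is then carved from that area, so
// the per-request cost never involves LE storage management.
class BigBlockManager {
public:
    explicit BigBlockManager(std::size_t poolSize)
        : base_(static_cast<char*>(std::malloc(poolSize))),
          size_(poolSize), used_(0) {}

    ~BigBlockManager() { std::free(base_); }   // one FREEMAIN at the end

    // Simple bump allocation; returns nullptr when the pool is
    // exhausted rather than falling back to LE, so the cost per
    // request stays flat and predictable.
    void* allocate(std::size_t n) {
        if (used_ + n > size_) return nullptr;
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

    std::size_t remaining() const { return size_ - used_; }

private:
    char* base_;
    std::size_t size_;
    std::size_t used_;
};
```

Whether this wins depends on how your manager reuses freed storage; a bump allocator like this never frees individual requests, which suits transaction-scoped work where everything is released at task end.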
However, your question really comes down to whether it is better to save on CPU or memory. Again (this is a RAHism), I'm never too excited about the quantity of memory used by an application: it's useful to do some tuning, but it takes whatever it takes. Sizing based on this is important for capacity planning and system configuration. I'm also not too concerned about CPU costs (people cost more than MIPS!) unless the activity volume is high enough to make an impact on capacity (and I do not get commission from the IBM hardware sales people!). Given a tradeoff between bigger memory with lower CPU as against less memory with bigger CPU, I would always tend to go for the lower-CPU, bigger-memory option - but that's just me.
This was first published in January 2003