I have a general question about what is possible in the realm of sockets when communicating with a CICS mainframe.
We have a piece of middleware software that collects and consolidates data from various sources and then passes that data on to various external systems such as ODBC databases, etc. We have provided a generic XML protocol over a TCP/IP socket connection to allow flexible connections to any backend host computers. The problem we are now hitting is a client who claims our topology/protocol isn't possible for them to handle in their environment.
Here is our system in a nutshell: we open a socket connection to the remote port and send what amounts to a logon message. Assuming we receive a favorable response, we then send data messages up, each of which has a response that can contain error information or a possible result from the remote processing. If a specified amount of idle time elapses between these data messages, we send a kind of keep-alive message that again has a response. Finally, when the overall processing is complete, we send the equivalent of a logoff message.
A few additional details: Because our software has no guarantee of how fast the host program will process any individual data message, we support a small fixed number of simultaneous connections so that long-running tasks won't totally block other shorter-running ones. The volume of these data messages varies from many-per-second to periods where minutes (or even perhaps hours) can go by without any new data. This approach works fine for the likes of a Java program running on a Unix box, but does it really present such a problem in the CICS world?
The problem is that hanging around on an open TCP/IP socket requires a CICS transaction on the other end to get the flow when it arrives. You cannot use listener-type techniques, because the listener only processes initial connection binds -- not data that takes ages to arrive.
The only way to accomplish what you want is to use a CICS waiting mechanism and poll the socket at frequent intervals (as often as you think prudent, bearing in mind that while waiting you will not be processing the flow from the client). This could go on forever, so you are in danger of ending up in a situation whereby all your CICS active transaction slots are filled with transactions just polling away and not doing anything of interest.
One way around this is to use the CMAXTASK facilities to limit the number of second service transactions in the region, but that means that under stress the clients will be hanging around, not getting anything back from CICS. You can have multiple CICS regions servicing the flows (using MVS TCP/IP port sharing) to alleviate the condition, but you still may end up in an apparent application wait state.
Whether or not the client has been coded to cope with a volume-related stall (somewhat unlikely, as Unix applications tend to assume that infinite resource is available, so trivial volume-related stalls never occur), the first service transaction can use XC INQUIRE facilities to detect the number of active second service transactions and simply refuse to start another one if capacity would be exceeded (it would be polite to return an error message).
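Stripped of the CICS specifics, that admission-control idea is just a bounded counter checked before starting a worker. Here is a minimal plain-C sketch of the gating logic; the names and the limit of 4 are invented for illustration, and in real CICS the count would come from XC INQUIRE rather than a variable:

```c
#include <stdio.h>

/* Hypothetical capacity gate. In CICS, the first service transaction
 * would use EXEC CICS INQUIRE to count active second service
 * transactions; here a plain counter stands in for that count. */
#define MAX_SERVICE_TASKS 4

static int active_service_tasks = 0;

/* Returns 1 if a new second service transaction may be started,
 * 0 if capacity is exceeded, in which case the caller should send an
 * error reply and close the socket instead of starting the transaction. */
int try_start_service_task(void)
{
    if (active_service_tasks >= MAX_SERVICE_TASKS)
        return 0;                  /* refuse: tell the client to retry later */
    active_service_tasks++;
    return 1;
}

/* Called when a second service transaction finishes. */
void end_service_task(void)
{
    if (active_service_tasks > 0)
        active_service_tasks--;
}
```

The key design point is that the refusal happens before the long-lived polling transaction is created, so an overloaded region sheds new connections instead of piling up waiters.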
The connection may also hit a TCP/IP timeout at any point -- so ensure that you CLOSE() the socket before XC RETURNing.
So, the last bit of code in the second service transaction should look like this:

    CURTRY = MAXTRY          (whatever you decide is the maximum number of intervals)
    waitloop:
        RECV(PEEK)
        if non-zero, reloop into the receive logic
        IF (CURTRY <= 0) THEN CLOSE() the socket and XC RETURN
        CURTRY = CURTRY - 1
        XC DELAY INTERVAL(n)
        go to waitloop
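Outside CICS, the same wait loop can be sketched with plain C sockets. This is an illustrative stand-in, not CICS code: recv() with MSG_PEEK plays the part of RECV(PEEK), and sleep() plays the part of XC DELAY INTERVAL(n).

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* Poll a socket for an incoming flow, up to max_tries intervals.
 * Returns 1 when data is waiting (run the receive logic),
 * 0 when the retry budget is exhausted (caller should close() and return),
 * -1 when the peer has closed the connection or a socket error occurred. */
int poll_for_flow(int sock, int max_tries, unsigned interval_secs)
{
    char probe;
    for (int tries = max_tries; tries > 0; tries--) {
        ssize_t n = recv(sock, &probe, 1, MSG_PEEK | MSG_DONTWAIT);
        if (n > 0)
            return 1;              /* data has arrived */
        if (n == 0)
            return -1;             /* peer closed the connection */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;             /* real socket error */
        sleep(interval_secs);      /* stand-in for XC DELAY INTERVAL(n) */
    }
    return 0;                      /* budget exhausted: close() and return */
}
```

Note the explicit decrement of the retry budget; without it, the loop would poll forever, which is exactly the stall condition described above.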
You will have to decide how long a client can keep the connection going by sending a heartbeat; is it valid to have an infinite number of these? If so, each socket/connection will have an associated CICS transaction, and these can eat up your CICS AMAXTASK slots -- you could get into a stall whereby nothing is running because every transaction is just awaiting the next flow from its client.
Similarly, think about defining a maximum number of heartbeats (both per socket and per idle period) and policing that count.
I'd hope that, as this XML protocol is under your control, you could include in the initial logon-type flow some sort of control information that sets a maximum lifetime for the socket and the heartbeat properties. The second service transaction could use that information to limit the volume of inactive sockets.
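Since the protocol is already XML, carrying those limits on the logon flow could be as simple as a few attributes. Every element and attribute name below is invented for illustration; the point is only that the client declares the socket lifetime and heartbeat budget up front, so the server-side transaction knows when it may give up.

```xml
<!-- Illustrative logon message; all names are hypothetical. -->
<logon user="MIDWARE1">
  <!-- Limits the CICS side can enforce for this connection -->
  <limits maxLifetimeSecs="7200"
          heartbeatIntervalSecs="60"
          maxConsecutiveHeartbeats="30"/>
</logon>
```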
Similarly, you should be able to code the client so that, on receipt of the logon response, it can cope with a too-many-active-at-the-moment, retry-later condition. It should already be able to cope with an error because the socket is no longer open (because the server/CICS has closed it).
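On the client side, that retry-later handling amounts to a bounded retry loop with backoff around the logon exchange. A minimal sketch, with invented status codes (the real protocol would define its own); the responses array simulates the server's reply to each successive attempt, standing in for "connect, send logon, read response":

```c
#include <unistd.h>

/* Hypothetical logon outcomes for the middleware's XML protocol. */
enum logon_status { LOGON_OK, LOGON_BUSY_RETRY_LATER, LOGON_REJECTED };

/* Attempt logon up to max_attempts times, backing off between tries
 * when the server reports it is at capacity.
 * Returns 1 on successful logon, 0 on giving up or hard rejection. */
int logon_with_retry(const enum logon_status *responses,
                     int max_attempts, unsigned backoff_secs)
{
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        switch (responses[attempt]) {
        case LOGON_OK:
            return 1;
        case LOGON_BUSY_RETRY_LATER:
            sleep(backoff_secs);   /* server at capacity: wait and retry */
            backoff_secs *= 2;     /* simple exponential backoff */
            break;
        case LOGON_REJECTED:
            return 0;              /* hard failure: do not retry */
        }
    }
    return 0;
}
```

A client built this way degrades gracefully when CICS refuses the connection instead of treating every refusal as a fatal error.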
So, it all comes down to restricting the number of second service transactions hanging around just waiting for something interesting from the client.
You can tell your doubting customer that I've said it's OK to use your software with CICS! But you should ensure that your protocol is sufficiently flexible first.
Related Q&A from Robert Crawford