
Four ways to add flexibility to mainframe applications

Performance versus flexibility is a classic trade-off in computer systems. Here are some tips for increasing the flexibility of mainframe legacy applications for today's needs. Options include using routing regions and dynamic control block structures, implementing version tolerance, and externalizing referential tables.

Performance versus flexibility is a classic trade-off in computer software and systems. Back in the days of 64K machines with clock speeds measured in thousands of instructions per second, performance was paramount. But with faster machines, Web interfaces and IT skill shortages, flexibility has become more important. So here are some tips for increasing the flexibility of mainframe applications as we try to hammer square legacy applications into the round hole of today's needs.

Add a level of abstraction or indirection
CICS unwittingly introduced a level of abstraction or indirection years ago with terminal owning regions (TORs), or, as they can be generically called, routing regions (RRs). It is more efficient to connect a client directly to the CICS region that will process the transaction. But, for a little more CPU and fewer headaches, you can attach the client to an RR that will decide the transaction's ultimate destination. This flexibility pays dividends in workload distribution and availability.

Other systems services can provide this type of function, such as WebSphere MQ shared queues, Sysplex Distributor (SD) and, to a lesser extent, VTAM generic resources. Whatever facility is used, the idea is the same: put something in the middle that can make intelligent workload routing decisions based on performance and resource availability.
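
To make the idea concrete, here is a minimal sketch in C of the routing decision such a middle layer makes. The region names, the "active task" load metric and the selection rule are all assumptions for illustration; a real TOR, Sysplex Distributor or MQ shared-queue configuration would base the decision on its own health and workload data.

    #include <stdio.h>

    /* Hypothetical view of a candidate application-owning region (AOR). */
    struct region {
        const char *name;         /* region/APPLID name           */
        int         available;    /* 1 if the region is up        */
        int         active_tasks; /* current workload indicator   */
    };

    /* Pick the least-loaded available region; return NULL if none are up.
       This stands in for the routing decision a TOR/RR (or Sysplex
       Distributor, MQ shared queue, etc.) makes on the client's behalf. */
    static const struct region *route(const struct region *regions, int n)
    {
        const struct region *best = NULL;
        for (int i = 0; i < n; i++) {
            if (!regions[i].available)
                continue;
            if (best == NULL || regions[i].active_tasks < best->active_tasks)
                best = &regions[i];
        }
        return best;
    }

    int main(void)
    {
        struct region aors[] = {
            { "CICSAOR1", 1, 42 },
            { "CICSAOR2", 0, 0  },   /* down for maintenance */
            { "CICSAOR3", 1, 17 },
        };
        const struct region *target = route(aors, 3);
        printf("Routing transaction to %s\n", target ? target->name : "(no region)");
        return 0;
    }

The point is simply that the client never names its final destination; the layer in the middle does, and can change its answer as regions come and go.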

Externalize referential tabular data
Internal tables that contain referential data will always perform better than external tables or a database. However, changing internal tables invokes the program life cycle, regression testing and a production implementation. Externalizing tables costs slightly more at execution but reaps benefits by getting changes to production faster with less fuss.
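
As an illustration, here is a small sketch in C of loading a referential table from an external file at startup instead of compiling it into the load module. The file name, record layout and field sizes are invented for the example; a shop might just as easily use a VSAM file or a DB2 table.

    #include <stdio.h>
    #include <stdlib.h>

    /* One entry in a referential lookup table, e.g. a code-to-description map. */
    struct ref_entry {
        char code[9];
        char description[41];
    };

    /* Load the table from an external dataset/file instead of hard-coding it.
       The format here (one "CODE DESCRIPTION" pair per line) is an assumption
       for illustration only. */
    static struct ref_entry *load_ref_table(const char *path, size_t *count)
    {
        FILE *f = fopen(path, "r");
        if (!f) return NULL;

        size_t cap = 16, n = 0;
        struct ref_entry *tab = malloc(cap * sizeof *tab);
        char line[128];

        while (tab && fgets(line, sizeof line, f)) {
            if (n == cap) {
                cap *= 2;
                tab = realloc(tab, cap * sizeof *tab);
                if (!tab) break;
            }
            if (sscanf(line, "%8s %40[^\n]", tab[n].code, tab[n].description) == 2)
                n++;
        }
        fclose(f);
        *count = n;
        return tab;   /* caller frees */
    }

With the table externalized, changing a code's description means editing the dataset and refreshing the table, not recompiling, relinking and promoting a new load module.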

Dynamic control block structures
Dynamic control block structures apply more to system level software, although some of the more sophisticated applications or corporate utilities might benefit from this as well. In the "good old days," static structures performed well and changing them wasn't a problem as no one used the system after 18:00.

Obviously this is not true any more. Home-grown system and application software needs to maintain and update its structures without disruption. Think about changing that sequential table into a linked list. Consider how you might update the control blocks without bouncing an address space. Lastly, instead of trying to predict how many blocks are needed, add logic to the software and build the structure as you go.
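
The sketch below shows the linked-list flavor of this idea in C: control blocks are allocated and chained as they are needed rather than reserved in a fixed-size static table. The structure and field names are illustrative, and real system code would also serialize the chain updates (compare-and-swap, latches or ENQs) so readers never see a broken chain.

    #include <stdlib.h>
    #include <string.h>

    /* A control block chained onto a linked list instead of sitting in a
       fixed-size static table, so blocks can be added while the address
       space stays up. */
    struct ctl_blk {
        char            id[8];
        struct ctl_blk *next;
    };

    static struct ctl_blk *chain_head = NULL;

    /* Add a block on demand instead of predicting the table size up front. */
    struct ctl_blk *add_ctl_blk(const char *id)
    {
        struct ctl_blk *blk = calloc(1, sizeof *blk);
        if (!blk) return NULL;
        strncpy(blk->id, id, sizeof blk->id);
        blk->next = chain_head;      /* push onto the front of the chain */
        chain_head = blk;
        return blk;
    }

    /* Walk the chain; it works no matter how many blocks have been added
       since the address space came up. */
    struct ctl_blk *find_ctl_blk(const char *id)
    {
        for (struct ctl_blk *p = chain_head; p; p = p->next)
            if (strncmp(p->id, id, sizeof p->id) == 0)
                return p;
        return NULL;
    }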

Version tolerance
When the mainframe was the only game in town we thought we had this problem licked. Yet, thirty years into the client-server "revolution," it is still with us, only worse. Usually, UNIX and Windows clients ask for data from hosts via strict application (as opposed to network) protocols. As a result, we often find ourselves in situations where the client and host pieces must move into production in lockstep or neither will work. Even more frightening is the thought that the new release won't work and both will have to be backed out before customers start logging on Monday.

There are a couple of ways to tackle this problem:

  • Module versioning: Add some sort of version qualifier to the application routine names, client and server, allowing several different versions of the same module to be used at one time. The downside is that someone has to write router code to invoke the correct module instance, and the multiple load modules cost extra virtual and real storage.
  • Message versioning: This is probably the cleanest approach: an interface version number is included in each message exchanged between the client and server. While it saves the storage and maintenance of carrying different versions of the same module, it requires the application's message processing logic to recognize each interface level and know how to deal with it (see the sketch after this list). The other challenge is deciding when a particular interface version is obsolete and its code can be safely removed.
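
Here is a small C sketch of the message versioning approach: a fixed header carries the interface version, and the server's dispatch logic picks the right layout for the body. The structures, field names and version numbers are invented for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* Every message carries its interface version in a fixed header, so the
       server can accept clients at different levels during a phased rollout. */
    struct msg_header {
        uint16_t version;     /* interface level of the body that follows */
        uint16_t body_len;
    };

    struct account_req_v1 { char account[10]; };
    struct account_req_v2 { char account[10]; char currency[3]; };  /* new field */

    static void process_request(const struct msg_header *hdr, const void *body)
    {
        switch (hdr->version) {
        case 1: {
            const struct account_req_v1 *req = body;
            printf("v1 request for account %.10s (default currency)\n", req->account);
            break;
        }
        case 2: {
            const struct account_req_v2 *req = body;
            printf("v2 request for account %.10s in %.3s\n", req->account, req->currency);
            break;
        }
        default:
            printf("unsupported interface version %u - reject politely\n", hdr->version);
        }
    }

When the last version 1 clients are finally retired, the "case 1" leg can be removed; that is the obsolescence decision mentioned above.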

Version tolerance means something a little different to system level software. It assumes the operating system and all its myriad subsystems can interoperate with different levels of themselves, both maintenance and release.

Customers won't be able to do this on their own. IBM and other vendors will have to write tolerance into their systems. They will also have to add version interoperability to their already bloated testing scripts. Lastly, fix documentation will have to be much better about identifying tolerance program temporary fixes (PTFs) and instances where a phased implementation isn't possible.

Improved availability is worth the investment
The investment is large but the payoff is huge. Imagine a world where a shop can IPL an LPAR with new maintenance Sunday morning. If all goes well, they leave the LPAR at the advanced level and introduce it into the Sysplex, where it can take on production work. Testing continues all through the week, building confidence that the new software is safe. Then, the next weekend, operations can bring the rest of the LPARs up to the new release while the systems programmers sleep snugly in their beds.

System versioning is where one of the mainframe's strengths, having fewer maintenance points and system instances, becomes a weakness. I also know this idea won't always be possible and gives some people the willies, but it is something we're going to have to do for the mainframe. I've seen it work on UNIX and can attest to the improvements in availability and system administrators' quality of life.

With my CICS background I tend to err on the side of flexibility. My preference has been encouraged by availability requirements and dealing with some, um, "quirky" applications. However, as a mainframer dedicated to cost savings and squeezing every last drop out of our machines, I recognize the importance of performance and simplicity. Somewhere there's a sweet spot, and I keep looking for it every day.

ABOUT THE AUTHOR: For 24 years, Robert Crawford has worked off and on as a CICS systems programmer. He is experienced in debugging and tuning applications and has written in COBOL, Assembler and C++ using VSAM, DLI and DB2.

