Migrating off the mainframe; part 3: Tuning apps for the new platform

After migrating off the mainframe, applications don't always run the same on Linux, Unix or Windows as they did on z/OS. In this tip, an expert talks about how to tune migrated apps for performance and security.

This is the third tip in a series on migrating off the mainframe. This installment covers how to tune migrated applications for the target platform. Also check out part 2, which covers segmenting the mainframe migration process to avoid disruption to end users.

Let us suppose that you have carried out a full port of a piece of mainframe software, and it runs on the target platform. Will the resulting program seem the same to end users? Not necessarily, because Windows and Unix/Linux differ in significant ways both from each other and from z/OS. To minimize disruption to end users, you need to tune the new software version to run effectively, with adequate performance and scalability, the desired security, and approximately the same functionality as before.

To tune migrated mainframe apps, you need to understand how the new operating environment will change the way an app runs. Here is a list of some of the key differences in Windows, Unix/Linux and z/OS that often lead to a need for tuning after migration.

  1. Windows, Unix and Linux are client-server; z/OS is "master-slave."
  2. Windows, Unix/Linux and z/OS have significantly different security schemes.
  3. Windows, Unix/Linux and z/OS have significantly different resource-access approaches, especially in I/O.

The good news is that compensating for these differences is usually not technically difficult. The best practice for target-platform tuning is simply to anticipate these problems and budget the time and effort to fix them. Those who run into trouble are typically the ones who assume this step can be skipped in the name of cost savings or migration speed.

Client-server vs. master-slave
The z/OS operating system (and therefore most mainframe applications) was built in an era in which the end users of an application numbered at most in the hundreds or thousands, and there was no such thing as a multiplexer. Communication with the outside world was "master-slave" -- dumb terminals could communicate with the mainframe only when the mainframe asked for input. So legacy mainframe apps are built to assume that only one or a few instances of the app will be running at a time, that most of their time will be spent processing rather than ingesting end-user input, and that communications will be "bursty" -- delivered and sent in large chunks.

On Windows or Unix/Linux, this approach often does not perform or scale well. These are scale-out platforms, with more frequent and smaller communications, where the remote client can initiate input to the server. As a result, the migrated mainframe app finds it difficult to respond swiftly to end-user requests. And because of the complexity of today's Web architectures, it can be difficult to detect where in the network the bottleneck is occurring.

Typically, the way to analyze the new app's network is with an end-to-end application management tool. Once you understand the likely usage patterns of the app, you should be able to fix many of the problems by adding CICS-like scale-out multiplexing software. Because the target platforms are often under-utilized, it should be possible to "throw processors" at input processing once a request arrives at a particular machine. In other words, tuning the communications software and the process scheduling should handle most, if not all, performance/scalability problems.
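
To make that concrete, here is a minimal Java sketch of what "throwing processors" at input processing can look like: a thread pool sized to the machine's cores, with clients free to initiate contact at any time. The class, port and handler here are hypothetical illustrations, not a prescription for any particular multiplexing product.

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: where a legacy mainframe-style loop would handle one
// request at a time, a thread pool lets the server absorb many small,
// client-initiated requests in parallel.
public class RequestServer {

    public static void main(String[] args) throws Exception {
        // Size the pool to the cores available on the (often under-utilized)
        // target machine -- this is the "throw processors at it" step.
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        try (ServerSocket listener = new ServerSocket(8080)) {
            while (true) {
                // Clients initiate input whenever they like; the server no
                // longer polls terminals for it, mainframe-style.
                Socket client = listener.accept();
                pool.execute(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            // Placeholder for the ported application logic: read the
            // request, run the business code, write the response.
        } catch (Exception e) {
            // Log and drop the connection.
        }
    }
}
```

The design point is that request arrival and request processing are decoupled -- which is precisely the assumption the legacy master-slave code never made.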

Security tuning
Calling this a security problem is a little alarmist. This is not a problem of the scale of denial-of-service attacks or malware. Rather, it has to do with the fact that Unix/Linux, Windows and z/OS each have different core ways of controlling access to software and applications.

z/OS provides sophisticated, specialized security software -- the Resource Access Control Facility (RACF) -- that allows a fine-grained specification of security for each end user. By contrast, Unix and Linux were built around very simple security primitives (read, write and execute permissions for a file's owner, its group and everyone else), which makes any security layered on top of them less fine-grained. Windows (more specifically, the network operating system extension of Windows) offers more primitives and does not confine access rights to the owner/group/other categories.

The result is that when you port a z/OS app to Unix/Linux, you are undergoing a "double approximation": the platform approximates the desired security scheme using Unix/Linux primitives, and the migrated app in turn approximates its old mainframe security scheme on top of that platform approximation. Windows allows a closer approximation to the mainframe's security scheme.

In either case, the danger of security that is too loose or too strict is not very great; both target platforms have made great strides in providing finer-grained security over the last two decades. However, "small danger" is not the same as "no danger." This is exactly the kind of situation that functional and stress test suites were built for, so careful application of these suites should either identify any problems or reassure you that there are none. At that point, incremental changes to the security of particular groups of end users should take care of any problems you identify.
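
As a concrete illustration of the "double approximation," here is a hypothetical Java sketch that flattens a per-user RACF rule into standard Unix owner/group/other permission bits. The dataset path, group arrangement and RACF profile are assumptions for illustration only.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Hypothetical sketch: RACF could grant READ to a specific list of users
// (e.g., PERMIT 'PAYROLL.DATA' ID(ALICE BOB) ACCESS(READ)). The closest
// native Unix/Linux approximation is usually a dedicated group plus the
// owner/group/other permission bits -- coarser than the original rule.
public class PermissionMapping {

    public static void main(String[] args) throws Exception {
        Path dataset = Paths.get("/srv/app/payroll.dat");  // assumed path

        // Assumes the relevant users were placed in a "payroll" group
        // out-of-band (groupadd/usermod) and the file's group was set
        // accordingly.
        Set<PosixFilePermission> perms =
                PosixFilePermissions.fromString("rw-r-----");
        Files.setPosixFilePermissions(dataset, perms);  // POSIX systems only

        // Anyone outside the group now has no access at all; RACF, by
        // contrast, could give each individual user a different level.
    }
}
```

Where the platform supports POSIX ACLs, a finer-grained mapping is possible, but the basic owner/group/other model is what a straight translation usually lands on.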

I/O tuning
Over the years, mainframe and Windows/Unix/Linux handling of some system resources (e.g., processors and storage) has converged. However, there are still underlying differences in each platform's approach to disk I/O. z/OS offers ISAM, VSAM and roll-your-own data I/O; Unix/Linux provides a simple, not-very-scalable file-oriented indexing scheme (related to inode indexing); and Windows offers an approach that includes some, but not all, of the mainframe's sophistication. The practical result is that mainframe I/O not handled by mainframe programming languages and data management tools does not translate well to vanilla Unix/Linux, and sometimes not to Windows, either. More specifically, older legacy mainframe apps that include highly tuned assembler data management code may not scale if the migration simply translates this code into the equivalent Unix/Linux or Windows I/O primitives.

The best fix for this problem is to make sure that the migration tool translates this code into more scalable Windows/Unix/Linux I/O commands in the first place. That is what the database companies do: They provide a different bypass of Unix/Linux I/O for each Unix/Linux implementation. Failing that, you should either do performance testing on these apps and then search for and fix the offending I/O code, or just do a global search and replace.
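
For a sense of what "more scalable I/O" means here, the following hypothetical Java sketch approximates VSAM KSDS-style keyed reads over a flat file of fixed-length records: a one-time sequential pass builds a key-to-offset index, so each later lookup costs a single seek instead of a file scan. The record layout and lengths are assumptions; in a real migration, an embedded or relational database usually plays this role.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.TreeMap;

// Hypothetical sketch: keyed access to a flat file of fixed-length records,
// standing in for a VSAM KSDS. A naive translation would scan the file on
// every lookup; indexing once makes each read a single seek.
public class KeyedFile implements AutoCloseable {

    private static final int RECORD_LEN = 128;  // assumed record length
    private static final int KEY_LEN = 16;      // key is the first 16 bytes

    private final RandomAccessFile file;
    private final TreeMap<String, Long> index = new TreeMap<>();

    public KeyedFile(String path) throws IOException {
        file = new RandomAccessFile(path, "r");
        byte[] record = new byte[RECORD_LEN];
        // One sequential pass to build the key -> offset index (VSAM keeps
        // an equivalent index on disk and maintains it on every write).
        for (long off = 0; file.read(record) == RECORD_LEN; off += RECORD_LEN) {
            index.put(new String(record, 0, KEY_LEN).trim(), off);
        }
    }

    public byte[] read(String key) throws IOException {
        Long off = index.get(key);
        if (off == null) {
            return null;  // no record with this key
        }
        byte[] record = new byte[RECORD_LEN];
        file.seek(off);
        file.readFully(record);
        return record;
    }

    @Override
    public void close() throws IOException {
        file.close();
    }
}
```

A TreeMap also preserves key order, so ranged and sequential access -- the other thing VSAM applications lean on -- remains cheap.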

Conclusion
Anticipating the ways that target-platform peculiarities can affect app performance and security can save a lot of grief for both the migrator and the end user of the application. Moreover, this marks the end stage of a successful process: once an app runs, performs, scales and provides adequate security on the target platform, the main goals of migration off the mainframe have been fulfilled.

However, more challenges are likely in the near future. What if the application needs to be moved to an internal or external cloud? Get-the-job-done mainframe migration does not guarantee cloud readiness. Is there a possibility that the app may need to be moved from Windows to Linux or back? Can you get more out of the migrated app, such as composing it with other apps to enhance or integrate business processes? What if you have to combine it with an app from a company you just acquired?

A little forethought during the migration can save a lot of time in future projects like these. In part 4 in the mainframe migration series, I'll discuss the relatively straightforward ways that a migrated app can be spruced up during the migration process to meet future needs.

ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.
