Suppose your users still need the functionality of an application now running on the mainframe, but you can no longer support it. For example, the developer/administrator who has been upgrading the application may be leaving, or you may simply have decided to stop spending money on upgrades. What are your alternatives? In this tip, I will explain the advantages and disadvantages of several application migration tactics, such as application modernization, re-engineering and distributed integration.
Broadly speaking, you can replace the app (hopefully), convert it to a language/operating system/platform that you can support and go from there, or treat it as a “black box” and create a separate sub-application that handles all upgrades. The last two of these strategies involve use of the following widely available technologies:
- Mainframe application modernization, which involves placing a Web service provider interface in front of the application to handle all interactions with the app;
- Re-engineering, which takes the functionality of the application and converts it to equivalent functions on another platform; and
- Distributed integration, which coordinates sub-applications running on multiple virtual or physical machines so that they act as one application.
Let’s consider the pros, cons and difficulties of these strategies and technologies.
Replacing the application
In an ideal world, application replacement would be the best alternative at all times. Just consider a new application that “magically” appears and runs on the most cost-effective platform you have, costs nothing to develop or acquire, and exactly duplicates the mainframe app you have. In the real world, this is usually the worst alternative: available packaged apps don’t mimic your application, angering end users; development from scratch is costly, drains money from key IT initiatives, and often fails because the functionality of the existing app is poorly understood and documented; and there’s no good Plan B if one replacement approach fails.
Most successful application replacements involving business-critical mainframe apps take a “good enough” approach. That is, they outsource the rewrite to an application migration shop with both mainframe and alternate-platform (typically Linux) expertise; they build on top of time-tested foundational software (open source or not); they focus on the key functions of the old mainframe app and do their best to minimize end-user dissatisfaction with a different, less capable new app; and they adopt a “parallel track” approach to deployment that retires the old app only after the new app has been working in the field for a year.
The best target operating system for these replacements today, counterintuitively, may often be Linux on the mainframe: Linux because it allows further changes of platform as needed with minimal app modification (Linux differs slightly between the mainframe and scale-up Unix or scale-out PC servers, but the new app can be written to be completely portable); and the mainframe because, as of today, many users report that it handles app scaling most cost-effectively. As one CTO put it to me recently: “The mainframe is telling me that it’s at 98% of capacity and can handle lots more, while the scale-out server is telling me that it’s at 50% of capacity and we need a new box.” In other words, scaling applications is often cheaper on the mainframe because adding more conventional servers to a scale-out setup is very expensive.
Conversion and distributed integration
Mainframe applications continue to be modernized, adding both Web browser/PC access and a Web service provider interface. However, many applications have not yet been modernized for this kind of Web support – and your application is probably one of them. Major mainframe software providers have plenty of tools and services for the task, whether you want to add a Web service provider interface before moving the application or after it arrives on the target platform. There is no hard-and-fast rule about which approach to application migration is better, but the more you know about the structure and behavior of a mainframe app, the easier its encapsulation as a “Web service provider.”
It is easy to underestimate the usefulness of mainframe application modernization. In particular, app migrators should understand that if they want to set up an internal cloud or use an external one, the migrated app will not be usable in the cloud unless it is exposed as a Web service. Cloud implementation is far, far easier over the long run if you virtualize your app as a Web service.
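To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names – `legacy_lookup` stands in for an untouched mainframe transaction) of what a Web service provider interface does: it marshals a modern, structured request into the legacy app’s fixed-format input and unmarshals the fixed-format output back into structured data, so callers never see the legacy formats at all.

```python
# Hypothetical sketch of a Web service "provider interface" in front of a
# legacy app. legacy_lookup() simulates the unchanged mainframe program:
# it takes a fixed-width record and returns a fixed-width result, the way
# a batch or screen-oriented mainframe transaction might.

def legacy_lookup(record: str) -> str:
    """Stand-in for the untouched mainframe 'black box'."""
    account = record[:8].strip()
    # Fixed-width reply: 8-char account, 4-char status, 7-digit balance (cents)
    return f"{account:<8}OK  0001250"

def service_get_balance(account: str) -> dict:
    """Modern, service-style entry point that callers actually see."""
    record = f"{account:<8}"            # marshal into the legacy fixed format
    raw = legacy_lookup(record)         # invoke the legacy code unchanged
    return {                            # unmarshal into structured data
        "account": raw[:8].strip(),
        "status": raw[8:12].strip(),
        "balance_cents": int(raw[12:]),
    }

print(service_get_balance("ACCT42"))
```

In a real deployment the `service_get_balance` function would sit behind an HTTP/SOAP endpoint and `legacy_lookup` would be a CICS transaction or similar, but the translation-in/translation-out shape is the same.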
Re-engineering is a good means of application migration for applications in which abstracting a model of the program (figuring out how it actually works) is feasible. A side effect of re-engineering a mainframe application is typically a standardized model (e.g., in UML) that allows modification at the design stage of development. This abstraction allows not only generation of app instances for all major programming languages and environments (e.g., Java on all platforms), but also much easier modification and upgrade of the program, by modifying the design rather than the app itself.
The key task of the smart “re-engineering specialist,” therefore, is to use a re-engineering tool from vendors like IBM, CA, Clerity Solutions Inc., or Micro Focus to generate a standardized model suitable for the enterprise’s existing development process. This will ensure that if the enterprise later wishes to port the app to another platform, or if it wishes to modernize, extend, or combine the app (i.e., make it part of a composite app), the process of doing so will be swifter and far less risky.
Distributed integration is about running the mainframe app (as a “black box” or as a converted application) on two or more target platforms, as in IBM's zEnterprise (Linux on blades and Linux on the mainframe). In the case of “black box” distributed integration, it usually makes sense to use either z/OS or mainframe Linux as one of the platforms to minimize the risks of platform movement. More typically, distributed integration involves moving to two target platforms instead of one, but integration tools between the platforms (virtual machine administration, networking and scheduling, for example) make the actual integration of the resulting sub-apps pretty straightforward.
The key to success in distributed integration is usually to aim for a target app distribution that will maximize app performance. For example, an app that benefits from scale-up should, as far as possible, put the heavy-duty processing on a scale-up server (scale-up or mainframe Linux), with the scale-out parts of the app performing “edge” tasks, such as caching. A scale-out-type app should put multiple parallel copies on multiple virtual machines across a grid-type or blade-type network, with central administration on a scale-up machine that is less likely to fail.
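The placement rule above can be sketched in a few lines. In this hypothetical Python fragment (all names illustrative), the scale-out “edge” tier holds a cache and only cache misses travel to the scale-up machine that does the heavy-duty processing:

```python
# Hypothetical sketch of scale-up/scale-out placement: "edge" scale-out
# nodes hold a cache, and only misses go upstream to the scale-up server
# that does the heavy processing. All names are illustrative.

edge_cache = {}                          # lives on the scale-out tier

def scale_up_compute(key: str) -> str:
    """Heavy-duty processing, placed on the scale-up server."""
    return key.upper()                   # stand-in for real work

def edge_lookup(key: str) -> str:
    """Runs on a scale-out node; shields the scale-up box from repeat work."""
    if key not in edge_cache:
        edge_cache[key] = scale_up_compute(key)  # only misses go upstream
    return edge_cache[key]

print(edge_lookup("order-17"))  # first call: computed on the scale-up tier
print(edge_lookup("order-17"))  # second call: served from the edge cache
```

The inverse placement – parallel workers on the scale-out grid with a central administrator on the scale-up machine – follows the same logic with the roles reversed.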
Many apps fall into the “in-between” category that zEnterprise targets, with some parts best handled by scale-up and others by scale-out. In that case, users should look for administrative tools with the strongest cross-platform administration support. Note that it is now possible, with some effort, to have Windows as one of the target platforms – via Windows emulation on z/OS for “black box” implementations, or by recompiling Linux versions to Windows (with recoding required for up to 20% of the code).
Another key task of distributed integration is to set up the application to be ready for distributed communication and integration with other distributed applications. It is a frequent misconception that this involves setting up standardized communication between applications. In reality, the key aim of distributed integration is to set up a standardized way to exchange data between applications – and, more specifically, to define the app’s metadata and include it in a global metadata repository. Data-combining solutions such as ETL tools or IBM’s Information Server can then use that metadata for master data management, cross-enterprise reporting and the like.
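A toy sketch of the metadata-repository idea, in Python with entirely hypothetical names (real repositories are products in their own right, not dictionaries): the app publishes a description of its tables and columns, and a data-combining tool can then discover related data across apps without touching the apps themselves.

```python
# Hypothetical sketch: publishing an app's data descriptions to a shared
# metadata repository so ETL-style tools can combine it with other apps'
# data. The repository and field names are illustrative, not a real API.

metadata_repository = {}  # stand-in for an enterprise-wide repository

def register_app_metadata(app_name: str, tables: dict) -> None:
    """Publish an app's table/column descriptions to the repository."""
    metadata_repository[app_name] = tables

register_app_metadata("order_entry", {
    "orders": {
        "order_id":  {"type": "char(10)", "key": True},
        "cust_id":   {"type": "char(8)",  "key": False},
        "total_usd": {"type": "decimal(9,2)", "key": False},
    },
})

# A data-combining tool can now discover, say, every column that looks
# like a customer identifier across all registered apps:
cust_columns = [
    (app, table, col)
    for app, tables in metadata_repository.items()
    for table, cols in tables.items()
    for col in cols
    if col.startswith("cust")
]
print(cust_columns)  # [('order_entry', 'orders', 'cust_id')]
```

This cross-app discovery step is exactly what master data management and cross-enterprise reporting depend on.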
As noted above, conversion is about changing the code of the existing mainframe app so it will run and be upgraded on a different platform. The goal, therefore, is to change things as little as possible, and in as automated a way as possible. You may also want to have it split between multiple platforms, or upgrade it in the process to allow composite-application support or better cloud or Web capabilities.
There are three basic ways of converting this type of mainframe app:
- Set up a target environment with target-platform versions of foundation software, and then rewrite the parts that aren’t supported. For example, mainframe COBOL apps that depend on CICS, MQSeries, and DB2 can take advantage of vendor conversion programs, while interleaved assembler code will need rewriting. Linux foundational support is available from vendors and from application migration outsourcers such as Clerity Solutions Inc.
- Re-engineer the mainframe app as described above. Again, application migration outsourcers have gotten much better at this, but it is still difficult or impossible to do in some cases.
- Rewrite the app as necessary piece by piece. Experience shows that this can be almost as risky as, and more costly than, replacing the application. Nevertheless, for business-critical apps, this may be your best option.
In the long run, re-engineering the app is likely to be the best choice, because it fundamentally modernizes the code and gives you great flexibility in making further changes to the app. In all three cases, application modernization is a good idea while converting the app, since its likely use will be as part of a Web-enabled organization, and perhaps even a cloud. Use of distributed integration is optional and, in many cases, will prove to be too complicated to include in the conversion process.
“Black box” distributed application
A “black box” approach to withdrawing support from a mainframe app is not a great long-term solution, and it makes further changes to the app difficult, but it is the least costly and risky way to handle a loss of app support. The basic idea is to leave the app in place but surround it with a Web service provider interface that makes the app look to the developer, administrator and end user like a Linux or Unix app with the same behavior. When upgrades are added, the interface forks end-user commands to new code on the new platform that handles the new cases.
The reason this works in the real world is that most, if not all, bugs in the mainframe app from now on will be problems of scaling beyond anything it has done before. The Web service interface can handle these problems by throttling back demand and by rewriting the functionality that typically causes them. It’s not high performance, and it has its own risks – assuming the problem is solved and then discovering at the worst possible moment that it isn’t – but, carefully done, it’s fast and cheap to implement, and very low risk.
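The routing-and-throttling layer described above can be sketched as follows, in Python with hypothetical names (`legacy_app` stands in for the untouched mainframe code, `new_code` for the upgrade code on the new platform):

```python
# Hypothetical sketch of the "black box" routing layer: commands the old
# app already handles go to it untouched; upgraded commands fork to new
# code on the new platform; and a simple concurrency cap throttles load
# so the old app is never pushed past the scale it has already proven.

import threading

MAX_IN_FLIGHT = 10                       # illustrative throttle limit
_throttle = threading.Semaphore(MAX_IN_FLIGHT)

def legacy_app(command: str) -> str:
    return f"legacy handled {command}"   # stand-in for the untouched app

def new_code(command: str) -> str:
    return f"new code handled {command}" # upgrades live here

NEW_COMMANDS = {"export_json"}           # cases added after the legacy freeze

def handle(command: str) -> str:
    if command in NEW_COMMANDS:          # fork upgrades to the new platform
        return new_code(command)
    with _throttle:                      # cap concurrent legacy requests
        return legacy_app(command)

print(handle("post_invoice"))
print(handle("export_json"))
```

The semaphore is the “throttling back demand” part; the `NEW_COMMANDS` fork is how upgrades land on the new platform while the black box stays frozen.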
By definition, “black box” distributed application implementation includes application modernization (the provider interface) and distributed integration (the provider-interface code and the new code are on the new platform, but the “black box” is on the old one), but does not include re-engineering. That means that your mainframe environment should have Web service provider interface development support and cross-virtual-machine networking and administration – and most mainframe environments today do indeed have that, out of the box. So if you make Linux on the mainframe your new platform, you can do it yourself or do it with minimal help from IBM. And, of course, all other environments are just as supportive of Web service provider interfaces, virtual machines, cross-platform networking and development; it’s just that cross-platform administration is likely to require a bit more elbow grease on your part.
Because doing it yourself is cheaper, it’s very tempting to implement a “black box” on your own. However, if you have the money, I would recommend calling in an outsourcer such as IBM, CA Technologies or Clerity, with the understanding that the result will still be cheaper than other strategies. These outsourcers “know where the rocks are” from long experience with other IT shops’ mainframe applications, so they can provide valuable advice about likely future bugs in this kind of app architecture – advice that will be worth its weight in gold for a business-critical app.
Realities of mainframe application modernization
There is one more caveat about all of these strategies: Code gets moved to a new platform, and that means a performance hit. That performance hit may be small, as in the case of the “black box” approach, or large, as in the case of rewriting most or all of the code, but experience shows that new apps rarely perform better in the new environment at first.
How do you choose between these strategies? First, eliminate the ones that aren’t feasible: in many cases, effective replacement can’t be done, and in some cases the code is so customized or impenetrable that re-engineering isn’t possible. Next, ask yourself just how critical this app is, and how urgent the need to withdraw support. For a not-so-critical app, when you have some time to get things right, re-engineering or another form of conversion may be just the ticket. If the need is urgent and support must end right now, “black box” may be the only way.
Only at this point should you consider cost. This is because in many real-world cases, “pay me now or pay me a lot more later” is the rule. It may seem very tempting to go for the approach with the lowest initial cost, such as rewriting the app piece by piece as necessary, or just slapping in a new open-source app that claims to do the same thing. But in many reported cases, such a project foundered in a sea of end-user rage at the new app, downtime and inability to fix teething problems. So initial cost should be carefully balanced against risk and ongoing maintenance costs.
One final note: There are application migration projects to reconfigure mainframe apps to new platforms that do pay off in the long term. The payoff typically occurs either because a Web-service provider interface allows mainframe code to be reused in composite apps, saving major new-app development time, or because re-engineering makes the functionality available to new end users and the code available for new applications. So don’t just assume this is a necessary chore, like a root canal. It may well be a diamond in the rough.
About the author
Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.