
Weighing the costs and risks of mainframe application modernization

The business benefits of mainframe application modernization outweigh the risks of allowing an outdated legacy application to remain as-is. This tip discusses considerations for updating business-critical mainframe applications.

This is the second installment of a three-part series on modernizing business-critical mainframe applications.

Check out the first chapter, on developing a mainframe modernization strategy.


IT often fails to understand the costs and benefits of mainframe application modernization and thus adopts a blanket as-is strategy that defines all mainframe applications as untouchable except for routine maintenance.

The net result of the as-is strategy has been a steady increase in the following costs of the legacy application inventory:

  • Maintenance costs, particularly costs associated with outdated applications about which key information has been lost and whose software and hardware is no longer adequately supported.
  • Opportunity costs, those costs incurred as maintenance spending crowds out new application development and packaged application spending.
  • Inefficiency costs, or costs incurred as the failure to proactively upgrade causes crisis mode, costly application fixes, periodic directives to move to a new platform, and wasted time and effort on flawed or failed major software improvement projects.

These costs often feed on themselves in a vicious circle. Choosing to maintain instead of improve an application means that the application continues to age, thus not only increasing maintenance, opportunity and inefficiency costs, but also increasing the gap between the legacy application and current technologies. This age gap in turn makes improvement more costly, which makes the organization more likely to choose the as-is strategy.

Replacing a mainframe app
Now consider the replace strategy. Superficially, this seems more attractive than ever, with an ever-widening array of packaged applications to choose from and with greater benefits from the new technologies such as Web services baked into the new applications.

However, because the application to be replaced is mission-critical, the new application must at least support the features of the old, and preferably the old business processes. Meanwhile, the old application, from decades of the as-is strategy, lacks documentation, experts and cultural willingness to move forward. As a result, replacement by a new application can involve loss of features, inadequate support for business processes and cultural resistance that will prevent implementation -- and when the application is business-critical, its replacement can be business-threatening.

Regenerating or migrating mainframe applications
Moving the application to a new platform by regeneration or migration can offer clear advantages. Past research by Infostructure Associates personnel shows that some medium-scale IT shops that have migrated individual applications from mainframes to Wintel platforms are seeing total cost of ownership (TCO) savings of up to 67%, plus significant increases in price/performance and flexibility. These improvements are primarily due to the Wintel architecture's lower acquisition and software license costs. However, if the mainframe is supporting 20 applications or more, our research shows that the mainframe usually offers savings versus Wintel or Linux systems in administrative and software costs -- if applications are modernized.

Migration, in particular, is supported by extensive automated tools for converting COBOL-, FORTRAN-, CICS- and DB2-based applications, although ISAM-based and assembler-based applications are not as well supported.

The costs of not modernizing applications often feed on themselves in a vicious circle.

However, the risks and costs of moving key mainframe applications to Unix or Windows can be large. Mainframe applications, especially those highly tuned for performance, are often so customized for the mainframe that they cannot simply be copied from one machine to another. Instead, those migrating the application must have a deep understanding of the application's code and purpose. In some cases, migrators must rewrite much of the application's code to run and deliver optimum performance on a very different type of computer, which could take months or even years. As a result, performance-critical applications may not perform adequately even after tuning, and new errors may creep in during the process, making the resulting application unusable. There are now tools that automate much of the process and avoid these hazards, but many enterprises are not yet applying best practices in migration.

Often, the application's documentation, or IT's knowledge of the application, has been lost or is inadequate. While automated migration tools can handle some of these cases, these tools often fall short. Regeneration, likewise, is hard-pressed to abstract to a design in cases where structured programming techniques were not followed in the first place (a common flaw of mainframe programs) and where no documentation exists to deduce the underlying design.

For mainframe applications sharing a common data store, migration or regeneration is even harder. If the data for one application is moved to a new platform and database -- perhaps to integrate with applications on the new platform -- then the applications remaining on the mainframe will need modifications in order to access their data from the new database, or application code must be written to keep the two databases in sync.
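To make the synchronization problem concrete, here is a minimal Python sketch of a change-propagation job between the two copies of a shared table. The schema, the version-counter column, and the use of SQLite as a stand-in for both the mainframe data store and the new platform's database are all assumptions for illustration; production systems would typically rely on log-based change data capture or vendor replication tools rather than polling.

```python
import sqlite3

def sync_changes(source, target, since_version):
    """Copy rows changed in `source` after `since_version` into `target`.

    Relies on a `version` column acting as a change counter (an
    assumption for this sketch). Returns the highest version seen,
    to be passed as the watermark for the next run.
    """
    rows = source.execute(
        "SELECT id, balance, version FROM accounts WHERE version > ?",
        (since_version,),
    ).fetchall()
    for row_id, balance, version in rows:
        # Upsert the changed row into the other database's copy.
        target.execute(
            "INSERT OR REPLACE INTO accounts (id, balance, version) "
            "VALUES (?, ?, ?)",
            (row_id, balance, version),
        )
    target.commit()
    return max((r[2] for r in rows), default=since_version)

# Two in-memory SQLite databases stand in for the mainframe data store
# and the migrated application's database.
mainframe_db = sqlite3.connect(":memory:")
platform_db = sqlite3.connect(":memory:")
for db in (mainframe_db, platform_db):
    db.execute(
        "CREATE TABLE accounts "
        "(id INTEGER PRIMARY KEY, balance REAL, version INTEGER)"
    )

mainframe_db.execute("INSERT INTO accounts VALUES (1, 100.0, 1)")
mainframe_db.execute("INSERT INTO accounts VALUES (2, 250.0, 2)")
watermark = sync_changes(mainframe_db, platform_db, since_version=0)
print(watermark)  # 2
```

Either way the cost is real: the remaining mainframe applications pay for remote data access, or the sync job above runs forever as new operational plumbing.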

Scaling the migrated application likewise is often tricky. Today's application servers, although superficially similar to traditional mainframe transaction processing (TP) monitors, aim at load balancing across application code, not transactions. As a result, mainframe applications that scale out to a few machines using mainframe TP monitors, such as IBM's CICS and Unisys' COMS, will usually require extensive adaptation to scale out to thousands of PC servers on Linux/Unix using Apache or JBoss, or in Windows using Microsoft's application server.

Typically, regeneration is used much less than migration, because the process requires greater application knowledge, and regeneration is perceived to be applicable to fewer applications. However, regeneration increasingly makes sense for virtualization, even if the mainframe application will remain on the same platform.

Upgrade in place makes sense
The key watershed in showing that upgrade in place is feasible was Y2K. Y2K remediation made it clear that even mainframe applications hitherto thought untouchable could indeed be modified successfully at a very low level in the code. Moreover, upgrading mainframe applications in place is more feasible than ever before because upgrade tools are better.

A side effect of vendors' Y2K efforts has been an extensive set of field-proven tools to upgrade mainframe applications. Middleware such as Unisys' ClearPath MCP solution and IBM's application modernization and enterprise transformation offerings, along with Web-servicization tools such as those provided by IBM, allow connectivity from the Web to the mainframe and permit users to create application veneers that add e-business functionality, such as Web services provider code and composite-application business-process support for supply chain and customer relationship management. Conversion to Java for use in Linux on the mainframe, by vendors such as Clerity/Veryant, is now also an option.

As a result, slowly, the great mass of mainframe applications has been Web-servicized, or is in the process of being Web-servicized -- although a large number of these applications have yet to be modernized. In other words, the attractiveness of upgrade in place by Web-servicization is being proven in the real world.
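As a rough illustration of what an application veneer does, the sketch below wraps a simulated COMMAREA-style legacy transaction behind a service-friendly function. The `legacy_inquiry` stand-in and its fixed-width record layout are invented for this example; a real veneer would reach the transaction through a vendor connector rather than a local function call, and the facade would then be exposed as a Web service endpoint.

```python
def legacy_inquiry(commarea: bytes) -> bytes:
    """Stand-in for a COMMAREA-style legacy transaction: fixed-width
    bytes in, fixed-width bytes out. Hypothetical data for illustration."""
    account = commarea[:8].decode("ascii").strip()
    balances = {"ACCT0001": "0000123.45"}
    return balances.get(account, "NOTFOUND  ").encode("ascii")

def service_get_balance(account_id: str) -> dict:
    """The veneer: a service-friendly facade that hides the legacy
    program's fixed-width I/O behind ordinary typed values."""
    commarea = account_id.ljust(8).encode("ascii")
    raw = legacy_inquiry(commarea)
    text = raw.decode("ascii").strip()
    if text == "NOTFOUND":
        return {"account": account_id, "error": "not found"}
    return {"account": account_id, "balance": float(text)}

print(service_get_balance("ACCT0001"))
# {'account': 'ACCT0001', 'balance': 123.45}
```

The point of the veneer is that the legacy program itself is untouched; only the thin translation layer is new code.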

Mainframe virtualization on the horizon
Web-servicization is only part of virtualization. To disconnect a mainframe app from a physical platform, IT must also convert all of the app's invocations of platform-specific code and resources to invocations of platform-independent middleware. One way to do this is to convert the code to Java, which means that the app will run on Linux on any hardware platform inside its own "Java virtual machine." However, some hard-coded features of a mainframe app, such as ISAM, VSAM, IMS or CICS invocations, may have no obvious analog on non-z/OS platforms. In that case, a modernization, reengineering, or migration tool, service, or vendor may supply both Windows and Linux equivalents.
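The "convert invocations" step described above can be sketched as coding the application to a neutral record-store interface and letting each platform supply its own implementation. Everything below (the `RecordStore` interface, the in-memory backend and the `post_payment` routine) is hypothetical, but it shows how application logic becomes independent of whether the backend is VSAM on z/OS or a SQL database elsewhere.

```python
from abc import ABC, abstractmethod

class RecordStore(ABC):
    """Neutral interface: the application calls this, never the
    platform-specific data access services directly."""

    @abstractmethod
    def read(self, key: str) -> str: ...

    @abstractmethod
    def write(self, key: str, value: str) -> None: ...

class InMemoryStore(RecordStore):
    """Stand-in backend; a real deployment would plug in a VSAM-backed
    implementation on z/OS or a SQL-backed one on Windows/Linux."""

    def __init__(self):
        self._data = {}

    def read(self, key: str) -> str:
        return self._data[key]

    def write(self, key: str, value: str) -> None:
        self._data[key] = value

def post_payment(store: RecordStore, key: str, amount: float) -> None:
    # Application logic sees only the neutral interface, so it runs
    # unchanged on any platform that supplies a RecordStore backend.
    store.write(key, f"PAID {amount:.2f}")

store = InMemoryStore()
post_payment(store, "INV-1", 42.5)
print(store.read("INV-1"))  # PAID 42.50
```

The design choice is the same one middleware vendors make: push every platform dependency behind one seam so that porting means replacing the backend, not rewriting the application.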

Ideally, however, the virtualized mainframe app should exist in a form that has no physical dependencies. This means that the app either uses only platform-independent standards such as SQL or runs on software that handles all data and resource access, all distribution, and all Web support -- and this underlying software runs on Windows and Linux on all platforms. For example, applications using only .NET can run on all major Windows and Linux hardware.

Because we are in a time of severe cost-cutting, few IT organizations have fully virtualized their mainframe applications. However, as users understand that importation of mainframe apps into a cloud that may not include mainframes requires not only Web-servicization but also full virtualization, we may anticipate increasing virtualization of mainframe apps.

Check out part 3 of this series, in which Kernochan outlines the potential of Web services for the mainframe.

ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.



What I never see in any articles about migrating off the mainframe is the task of migrating the data to a format that can be used by a network app. I also never see anyone talk about the storage differences, or the amount of storage needed on the mainframe versus the network for archived and historical data.

You can't feed a 3590 tape dataset to a network-based app; the data has to be converted. And what about current storage requirements? Is the compression ratio the same on mainframe DASD as on network storage solutions?

We cut one small app over from the mainframe to Oracle on Linux (network servers and disk). We ran out of space for the current application data in the first week. No one thought about it and no one calculated it. I don't even know if there's a formula to do it, yet I will have to because we are in a new round of move-it-off-the-mainframe.

Lastly, there's DR, which absolutely no one ever discusses. The app we moved off? No one even thought about the hardware requirements. They just wanted the vendor to write up a plan and take regular backups. The fact that we had nowhere to restore it to was completely lost on everyone (obviously, except me). It's our revenue system. It can't be replicated, and neither can anything else on our network, including the network itself. On the other hand, 10 3590 tapes and the whole mainframe can be replicated at SunGard in less than 8 hours.

Why do we want to migrate to networks again?