Let's suppose that you've embarked on a project to move all workloads off the mainframe. Typically, the ones you care about are older, run on z/OS and depend on something unique to the mainframe -- COBOL, IMS, CICS and the like. Let us further suppose you have carried out the strategic best practices from part one: triaging the software to be moved, choosing third-party migration tools and setting out a process that involves staging the transition.
The next task in implementing the migration is to segment the job: figure out which software is easiest to migrate, then set up processes and toolsets for each type of mainframe software, each reflecting its ease of migration.
Segmenting applications by level of difficulty to rehost
Because most mainframe apps depend on something unique to the mainframe, users in the past have found that this type of migration is a bit less straightforward than, say, Unix to Linux or even Windows to Linux. Applications written in COBOL that don't use any other mainframe-specific software or firmware can use Micro Focus tools or the like to recompile on the target platform, and in most cases they work "out of the box."
Next in level of difficulty are those applications that depend on COBOL and some flavor of CICS -- these require use of CICS or UniKix on Unix/Linux/Windows. In other words, there are straightforward tools to migrate vanilla COBOL, but recreating CICS in the new environment requires a bit more digging. Beyond that, the task gets significantly harder.
Next up on the list are applications that depend only on COBOL, DB2 and maybe CICS. DB2 translation is not straightforward, even though DB2 on the mainframe and on other platforms is now pretty close to equivalent, because most DB2-dependent apps use stored procedures for performance and business rules, and these are often particular to the mainframe. Moreover, multiple apps may be using the same database/data store with different stored procedures, so recreating some stored procedures on the target platform does not spare you from having to do the same work in a later DB2 migration. Still, these ports are not drastically difficult.
Beyond these workloads come those dependent on software that has been out of the mainstream for the last 20 years. These include apps dependent on third-party mainframe databases -- Datacom, MODEL 204, Adabas -- and IBM's own IMS data management. Many of these apps use proprietary languages (MODEL 204 User Language) or old-time 3GLs (third-generation languages) -- PL/I, FORTRAN. This is a specialty of modernization and migration tool vendors. Therefore, while these are typically not recompile-and-go, most of the rewriting task is now automated in these tools. Plan for the migration to take months, but not years.
As we reach the hardest apps, we start talking about those that depend significantly on IBM Assembler, or on the bare-metal ISAM and VSAM data-storage techniques. Even these are not beyond migration, but regeneration tools are a very good idea, if applicable. These apps often lack documentation and may be indecipherable, and the developers who understood them may have retired. Therefore, an automated regeneration tool that can figure out what is really going on may well be the only way to recreate them in the new environment. Bear in mind that these are the types of apps targeted by Y2K tools 10 years ago, so some tools to handle even the hardest migration tasks do exist.
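The tiers above amount to a dependency scan: the hardest mainframe-specific construct an app touches determines its segment. As a minimal sketch only -- the marker strings, tier labels and regexes here are illustrative assumptions, not any vendor's actual tool -- a first-pass triage script might search each application's source for mainframe-specific calls and bucket it accordingly:

```python
import re

# Markers of mainframe-specific dependencies, ordered hardest tier first.
# Tier names and patterns are illustrative assumptions, not a real product.
TIERS = [
    ("assembler/VSAM (hardest)", re.compile(r"\bCSECT\b|\bVSAM\b|\bISAM\b")),
    ("legacy DBMS/3GL",          re.compile(r"\bCBLTDLI\b|\bADABAS\b|\bDATACOM\b")),
    ("COBOL + DB2",              re.compile(r"EXEC\s+SQL")),
    ("COBOL + CICS",             re.compile(r"EXEC\s+CICS")),
]

def classify(source: str) -> str:
    """Return the hardest migration tier whose markers appear in the source."""
    for tier, pattern in TIERS:
        if pattern.search(source):
            return tier
    return "plain COBOL (recompile-and-go)"

# Example: a COBOL fragment that issues a CICS command.
sample = """
    PROCEDURE DIVISION.
        EXEC CICS SEND TEXT FROM(MSG) END-EXEC.
"""
print(classify(sample))  # -> COBOL + CICS
```

A real scan would of course cover copybooks, JCL and load-module references as well; the point is only that because the tiers are ordered, each app falls into exactly one segment, which is what lets you assign one toolset and one process per segment.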
Note that even if you have some apps that require such extensive modification that they are effectively impossible to convert to run on the new platform, all is not lost. There are tools available that do "functional analysis" of such applications and capture their behavior. These can then be used to generate an application that mimics that behavior on the new platform, or as the basis for development of an application to replace the unmigratable mainframe app.
Users' best practices with regard to segmentation have to do with how you employ the resulting mainframe-app classification. Smart users do not focus on getting business-critical apps off the mainframe first, whether they are easy to port or not; that approach actually can slow overall migration, because the resulting process jumps back and forth between fast-migration and slow-migration tools. Rather, these users first find the right tools for each segment and set up a separate process for each, and then optimize that process and allocate resources between processes.
Deciding which apps do not need to be migrated should wait until after segmentation and process design. You may find that an app you reluctantly ruled out as not absolutely vital is in fact easy to migrate, while one that seems essential to migrate at first blush may matter less once you realize that related, easier-to-migrate apps can take up the slack.
The other important best practice in using segmentation is to use the greatest amount of automation (and expertise) possible in the migration tools for each segment, rather than the cheapest tool available. This is particularly true for hard-to-migrate business-critical apps, as they will typically be the largest and most complex, involve the most migration work, and be on the critical path for getting the migration done.
If you have followed the series thus far, you should have a clear idea of what it takes to migrate each piece of mainframe software to the target platform, and which tools and processes are most effective in each case for delivering apps that run on the new platform.
However, as many migrators have found, simply segmenting apps is not enough. Segmentation is about figuring out how long it will take to get mainframe software functioning on the target platform. It says nothing about how well the software functions there: whether there are major performance slowdowns on the new platform, and whether it can be upgraded easily once it arrives. In the past, these issues have caused large delays in migration and a great deal of end-user disruption. Getting mainframe software to run well on the target platform is the subject of part three.
ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.
What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at firstname.lastname@example.org.
This was first published in October 2009