Enterprise mainframes, and those responsible for them, are at the center of a perfect storm. Despite earlier predictions, mainframes haven't gone away. They still hold up to 80% of corporate data and are arguably more important than ever. And heading right toward these assets are waves of information users whose jobs depend on applications that require timely access to that data, such as business intelligence, regulatory compliance, single views of "x" (customers, etc.), supply-chain optimization, and a host of other high-visibility IT-driven initiatives. Just to make it more interesting, even those IT groups with the budget and will to open up their mainframe assets are having an increasingly tough time finding skilled programmers to do the extensive and risky work. And so the storm swirls, for now.
Accessing mainframe data on a timely basis for nontraditional mainframe purposes has never been easy, nor cheap, nor in many cases even feasible. But this is changing, and rapidly. Soon it may be possible to get direct, flexible and scalable access to complex data in mainframe legacy applications quickly, without undue cost, and with minimal risk.
The fading reality—pain points galore
Traditionally, extracting data from mainframes has meant COBOL-based hand-coding -- lots of it. Even if the human resources can be found, the work remains time-consuming, costly and risky. The results, meanwhile, are usually brittle and non-scalable. It's hard to make changes in order to adapt to evolving end-user information demands. And it's hard to accommodate growing numbers of systems and new end-user applications requiring the same mainframe data at the same time. Doing so requires more hand-coding and, worse, imposes further performance demands on operational systems or calls for additional mainframe horsepower.
The new reality—high-bandwidth native access
Today the elements are in place for an innovative new approach to mainframe access. It involves implementing a relatively simple framework that is flexible, scalable and non-invasive to transaction systems, and which enables external applications to use SQL (a universal end-user access "tool" if ever there was one) to get to mainframe data. With this new approach, the mainframe becomes like any other system in an open shared-data environment -- invisible.
The operative element is transactional control information, or metadata, which enables direct native data access. To get at the desired data, you need only create a visual map between the mainframe data structures and an SQL representation; SQL calls can then be used to access the underlying data. You use metadata to map the data you want to pull out -- for example, a solution-specific mapping of elements such as particular tables and columns. You then use the map to direct downstream data integrations and subsequent mappings into any number of external systems.
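To make the idea concrete, here is a minimal sketch of such a metadata map. The COBOL field names, PIC clauses, table name and SQL types are all hypothetical; a real product would capture this mapping visually rather than in code:

```python
# Hypothetical metadata map: COBOL copybook fields -> SQL columns.
# Every name and type below is illustrative, not from any real system.
FIELD_MAP = [
    # (copybook field, PIC clause,  SQL column,      SQL type)
    ("CUST-ID",        "PIC 9(8)",  "customer_id",   "INTEGER"),
    ("CUST-NAME",      "PIC X(30)", "customer_name", "VARCHAR(30)"),
    ("LAST-ORDER",     "PIC 9(8)",  "last_order",    "DATE"),
]

def to_create_table(table, field_map):
    """Render the mapping as a CREATE TABLE statement, so SQL tools
    can address the mainframe records as ordinary rows and columns."""
    cols = ",\n  ".join(f"{col} {typ}" for _, _, col, typ in field_map)
    return f"CREATE TABLE {table} (\n  {cols}\n);"

print(to_create_table("customers", FIELD_MAP))
```

The point of the sketch is simply that once the map exists, the SQL side is mechanical: the same metadata can drive table definitions, queries, and downstream integrations.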
A primary advantage of this direct native access approach is that it eliminates hand-coding. This not only saves substantial time and money, but also introduces considerable flexibility. For example, it's extremely easy to change the metadata-based visual mappings in response to changing needs. Data accuracy is also enhanced, as the mappings make it very easy to align fields, convert dates, and so on.
Just as important, such an approach is inherently scalable. The data is read once, but it can be consumed many times by many different downstream applications and users; it's simply a matter of maintaining multiple mappings concurrently for different end uses. To preserve mainframe performance, the source data can be read in a single pass, serving all the mappings in parallel.
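The read-once, consume-many pattern can be sketched in a few lines. Here the "mappings" are plain callables fed from a single pass over the records; the record shape and the two hypothetical feeds are invented for illustration:

```python
# Sketch: read each source record once, fan it out to several
# downstream consumers (stand-ins for concurrent mappings).
def single_pass(records, consumers):
    for rec in records:          # one pass over the source data
        for consume in consumers:
            consume(rec)         # each mapping sees every record

warehouse, audit = [], []
single_pass(
    [{"id": 1}, {"id": 2}],
    [warehouse.append,                   # full records for a warehouse feed
     lambda r: audit.append(r["id"])],   # ids only, for an audit feed
)
print(warehouse)
print(audit)
```

Each additional end use costs only another entry in the consumer list, not another read of the mainframe source.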
Along the same lines, there is really no reason left to do data conversion (EBCDIC to ASCII) on the mainframe itself, as has been common in the past. This new approach makes it easy to move conversion processing off of the mainframe and onto lower-cost platforms such as Windows or UNIX system servers to preserve mainframe MIPS.
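As a small illustration of doing that conversion off the mainframe, Python ships codecs for common EBCDIC code pages; the snippet below round-trips a string through code page 037 on an ordinary server:

```python
# Sketch: EBCDIC-to-ASCII conversion on a non-mainframe platform,
# using Python's built-in codec for EBCDIC code page 037.
ebcdic_bytes = "HELLO".encode("cp037")   # as the bytes would arrive from the host
text = ebcdic_bytes.decode("cp037")      # decode on the cheap platform
ascii_bytes = text.encode("ascii")       # re-encode for downstream consumers
print(ascii_bytes)
```

The conversion itself is trivial once the bytes are off the host, which is exactly why it makes sense to spend Windows or UNIX cycles on it rather than mainframe MIPS.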
Protected any-to-any distribution
Once out of the mainframe, where can the data go? The short answer is to the right people and processes, at the right time. If you follow this approach, you have a framework not just for native data access but also for protected and easily managed any-to-any data distribution. Once unlocked from CICS, VSAM and other legacy sources, the data can be placed in a staging area off the mainframe. There, ETL and data integration processes, real-time message queues, and external systems can access it.
Depending on the environment and data ownership considerations, mainframe owners can maintain full control via a "push" execution model. Or they can let external processes "pull" the data. Security can be maintained via data-stream security (encryption) and user-level security via familiar mainframe security packages.
Right time, at long last
In terms of on-demand access, the direct native access approach supports batch, real-time and even changed-data capture, all equally well. You're essentially creating an architecture to deliver what's needed, wherever and whenever. Flexibility to meet varying user requirements is built in.
Mainframe changed-data capture, in particular, has traditionally been very problematic. It can place enormous demands on mainframe processing resources and has customarily required serious and costly tinkering with legacy applications. Yet changed-data capture is essential to reducing data volumes and enabling on-demand data delivery for many end-user applications and business processes. A direct native access approach enables a new paradigm for mainframe changed-data capture, one that eliminates having to tamper with application logic, avoids cumbersome file-comparison techniques, facilitates seamless error recovery, and elevates design flexibility to a new level.
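The downstream half of changed-data capture is easy to picture, whatever the capture mechanism. The sketch below applies a stream of change events to a target store; the event shape (op, key, row) is entirely hypothetical and stands in for whatever format a real capture feed would emit:

```python
# Sketch: applying captured change events downstream.
# The event format here is invented for illustration only.
def apply_changes(store, events):
    for ev in events:
        if ev["op"] in ("insert", "update"):
            store[ev["key"]] = ev["row"]     # upsert the changed row
        elif ev["op"] == "delete":
            store.pop(ev["key"], None)       # drop the deleted row
    return store

store = {}
apply_changes(store, [
    {"op": "insert", "key": 1, "row": {"bal": 100}},
    {"op": "update", "key": 1, "row": {"bal": 150}},
    {"op": "delete", "key": 1, "row": None},
])
print(store)
```

Because only changes flow, the volume moved off the mainframe is a fraction of a full extract, which is what makes on-demand delivery practical.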
The onslaught of demand for mainframe data does not look like it will subside anytime soon, but with the right plan for approaching and accessing that data, IT looks set to weather the storm.