In this all-encompassing mainframe guide, learn some of the terminology associated with the mainframe that will aid you in effective mainframe management in your data center. Part I details the various IBM mainframe utilities, including IEBCOMPR and IEBCOPY, which can be used in backup scenarios. Part II is a glossary that details classic mainframe technologies and clears up some of the more obscure IBM mainframe utility terminology. If there are terms you think should be added to the glossaries below, let us know.
Part I: IBM mainframe utilities
By SearchDataCenter.com, with contributions from Wayne Kernochan and Robert Crawford
(These utilities may be invoked in JCL. A set of utility-specific commands may then be issued in the program after the JCL to carry out various utility-specific operations on the data sets.)
Data set utilities
IDCAMS is a utility that supports creation, population, deletion and management of VSAM data sets as well as Integrated Catalog Facility (ICF) catalogs. Commands that carry out IDCAMS tasks include DEFINE CLUSTER, REPRO and ALTER. Before IBM introduced VSAM, IEBISAM carried out similar functions for ISAM data sets. Because of VSAM's advantages over ISAM, VSAM and IDCAMS are now used except in legacy implementations.
- Changing names of VSAM data sets with IDCAMS and ALTER.
- Managing VSAM data sets with IDCAMS.
- Using the REPRO command with IDCAMS.
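As a sketch, a minimal IDCAMS job that defines a VSAM KSDS and then loads it from a flat file with REPRO might look like the following. The data set names, key length and space figures are placeholders, not values from this article:

```jcl
//DEFVSAM  JOB (ACCT),'DEFINE KSDS',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.VSAM.KSDS) -
         INDEXED                     -
         KEYS(8 0)                   -
         RECORDSIZE(80 80)           -
         TRACKS(5 1))
  REPRO INDATASET(MY.FLAT.FILE) -
        OUTDATASET(MY.VSAM.KSDS)
/*
```

Commands are read from SYSIN, with a trailing hyphen continuing a command onto the next line.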
IEBCOMPR compares two sequential or two partitioned data sets to determine whether they are identical. This function is especially useful for backup, as it allows users to avoid backing up data sets that have not changed since the last backup. IEBCOMPR is not used as much as the SUPERC utility (an ISPF/PDF tool).
- Examples from IBM of using IEBCOMPR to compare data sets.
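A minimal sketch of an IEBCOMPR step comparing two sequential data sets (the data set names are placeholders). For partitioned data sets, a COMPARE TYPORG=PO control statement would replace the DUMMY SYSIN:

```jcl
//COMPARE  JOB (ACCT),'IEBCOMPR',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=BACKUP.COPY,DISP=SHR
//SYSUT2   DD DSN=CURRENT.COPY,DISP=SHR
//SYSIN    DD DUMMY
```

In a backup scenario, a nonzero condition code from this step generally signals that the data sets differ and a fresh backup is warranted.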
IEBCOPY is a utility that copies and/or merges partitioned data sets. It is commonly used as a tool to create backups of partitioned data sets. It may also convert load modules into newer formats. If a PDS runs out of space, IEBCOPY can compress it.
- Examples from IBM of using IEBCOPY to copy members in data sets.
- More on copying data sets with IEBCOPY.
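As a sketch, an IEBCOPY step that backs up one PDS to another might look like the following (data set names and space allocations are placeholders):

```jcl
//BACKUP   JOB (ACCT),'IEBCOPY',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INPDS    DD DSN=MY.SOURCE.PDS,DISP=SHR
//OUTPDS   DD DSN=MY.BACKUP.PDS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(10,5,20)),
//            DCB=*.INPDS
//SYSIN    DD *
  COPY OUTDD=OUTPDS,INDD=INPDS
/*
```

To compress a PDS in place, the same COPY statement is used with the input and output DDs pointing at the same data set.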
IEBDG generates a set of test data set records. Any pattern of data can be input by the user to generate results that can be used for batch application testing and debugging.
- Examples of generating test data set patterns with IEBDG.
- One user's use of IEBDG for testing applications.
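A minimal sketch of an IEBDG step that generates 100 records of alphanumeric test data. The field names, record layout and space figures are placeholder assumptions:

```jcl
//GENDATA  JOB (ACCT),'IEBDG',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBDG
//SYSPRINT DD SYSOUT=*
//SEQOUT   DD DSN=TEST.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//SYSIN    DD *
  DSD OUTPUT=(SEQOUT)
  FD NAME=FLD1,LENGTH=80,FORMAT=AN
  CREATE QUANTITY=100,NAME=FLD1
  END
/*
```

The FD statement defines a field pattern and CREATE writes the requested number of records built from it.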
IEBEDIT copies whole jobs, or selected steps of jobs, from an input JCL stream to an output data set. The statements appear in the output in the same order in which they were input.
- Examples, including copying job steps from multiple jobs, in creating data sets with IEBEDIT.
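As a sketch, an IEBEDIT step that extracts two steps of a job named JOBA from a JCL library into a new data set might look like this (all names and attributes are placeholders):

```jcl
//SELECT   JOB (ACCT),'IEBEDIT',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBEDIT
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.JCL.STREAM,DISP=SHR
//SYSUT2   DD DSN=MY.JCL.SUBSET,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//SYSIN    DD *
  EDIT START=JOBA,TYPE=INCLUDE,STEPNAME=(STEP1,STEP3)
/*
```

TYPE=EXCLUDE works the same way in reverse, copying everything except the named steps.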
IEBGENER is a utility used to copy sequential data sets or members of partitioned data sets. For example, IEBCOMPR can compare two sequential data sets to determine if they are identical, and IEBGENER can then be used to copy the newer of the data sets to disk for backup if they are not identical. Also, with the right control cards, IEBGENER can convert sequential data sets into PDS members or edit an input file into an output data set.
- A tutorial on copying sequential data sets with IEBGENER.
- Examples from IBM on using tasks within IEBGENER.
- The University of South Carolina's guide to using IEBGENER.
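A minimal sketch of an IEBGENER step that makes a straight copy of a sequential data set. SYSIN is DUMMY because no editing is requested; the names and DCB attributes are placeholders:

```jcl
//COPYJOB  JOB (ACCT),'IEBGENER',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=INPUT.SEQ.DATASET,DISP=SHR
//SYSUT2   DD DSN=OUTPUT.SEQ.DATASET,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//SYSIN    DD DUMMY
```

SYSUT1 is always the input and SYSUT2 the output; editing and reformatting are requested through GENERATE, MEMBER and RECORD statements in SYSIN.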
IEBIMAGE creates or manipulates IBM 3800 and IBM 4248 printer modules or images and stores them in a library for future printing.
- Printing an entire IBM-supplied module and other tasks with IEBIMAGE.
IEBPTPCH is a utility that prints (or punches on cards) parts of, or entire, sequential or partitioned data sets. It is also commonly used for tasks such as screening for empty data sets and printing key records.
- Examples of using IEBPTPCH to print partitioned data sets.
- Examples of punching a data set with IEBPTPCH.
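As a sketch, an IEBPTPCH step that prints a sequential data set to SYSOUT might look like this (the data set name and the 80-byte field width are placeholder assumptions):

```jcl
//PRINTIT  JOB (ACCT),'IEBPTPCH',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBPTPCH
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.SEQ.DATASET,DISP=SHR
//SYSUT2   DD SYSOUT=*
//SYSIN    DD *
  PRINT MAXFLDS=1
  RECORD FIELD=(80)
/*
```

Replacing PRINT with PUNCH requests card output instead of printed output.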
IEBUPDTE creates or changes data in partitioned data sets. It is typically used for creating/maintaining JCL and source libraries.
- Examples of using IEBUPDTE.
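A minimal sketch of using IEBUPDTE to add a member to a JCL library. SYSIN is coded as DD DATA so that the stored lines beginning with // are not interpreted as JCL for this job; the library and member names are placeholders:

```jcl
//ADDMEM   JOB (ACCT),'IEBUPDTE',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBUPDTE,PARM=NEW
//SYSPRINT DD SYSOUT=*
//SYSUT2   DD DSN=MY.JCL.LIB,DISP=SHR
//SYSIN    DD DATA
./ ADD NAME=NEWMEM
//* the JCL or source lines to store in NEWMEM go here
./ ENDUP
/*
```

The ./ control statements (ADD, REPL, CHANGE, ENDUP) drive the update; PARM=NEW tells the utility all input comes from SYSIN rather than an existing SYSUT1 library.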
IEFBR14 is a dummy program that can be used for checking JCL syntax errors as well as defining or deleting data sets without a utility.
- Examples of using IEFBR14, including using the dummy program in deleting data sets.
- An improvement to IEFBR14 in z/OS 1.11.
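Because IEFBR14 does nothing itself, all the work happens in JCL disposition processing. A sketch of a step that allocates one data set and deletes another (names and attributes are placeholders):

```jcl
//HOUSEKP  JOB (ACCT),'IEFBR14',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEFBR14
//NEWDS    DD DSN=MY.NEW.DATASET,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//OLDDS    DD DSN=MY.OLD.DATASET,DISP=(OLD,DELETE,DELETE)
```

The NEWDS data set is created and cataloged, and the OLDDS data set is deleted, purely as side effects of allocation and disposition.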
ICKDSF installs, initializes and manages direct-access storage devices (DASD, or disks). It can also be used for such tasks as initializing disk volumes and detecting disk-related system errors.
- Initializing DASD volumes using ICKDSF.
- A list of ICKDSF commands.
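As a sketch, an ICKDSF step that initializes a disk volume might look like the following. The unit address, volume serials and VTOC placement are placeholder assumptions, and VERIFY guards against initializing the wrong volume:

```jcl
//INITVOL  JOB (ACCT),'ICKDSF',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INIT UNITADDRESS(0A80) VOLID(VOL001) -
       VERIFY(OLDVOL) VTOC(0,1,14)
/*
```

VERIFY(OLDVOL) makes the command fail unless the volume currently carries the serial OLDVOL, a common safeguard for this destructive operation.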
IEHINITT is a utility that writes magnetic tape label(s) in ASCII or EBCDIC.
- Using the RACF command to restrict certain users from IEHINITT.
- Examples from IBM of writing magnetic tape labels with IEHINITT.
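A minimal sketch of an IEHINITT step writing a standard label on a tape. Note that the label field of the INITT statement in SYSIN must match the DD name; the tape unit and serial are placeholders:

```jcl
//LABELIT  JOB (ACCT),'IEHINITT',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEHINITT
//SYSPRINT DD SYSOUT=*
//LABEL1   DD UNIT=(TAPE,,DEFER)
//SYSIN    DD *
LABEL1   INITT SER=TAP001
/*
```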
IEHLIST lists entries in partitioned data set directories or in a volume table of contents.
- Examples of listing data set entries with IEHLIST.
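As a sketch, an IEHLIST step that lists the VTOC of a disk volume in formatted form (the device type and volume serial are placeholders):

```jcl
//LISTVOL  JOB (ACCT),'IEHLIST',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//DD1      DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=VOL001
/*
```

A LISTPDS statement can be used in the same job to list a partitioned data set's directory instead.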
IEHMOVE moves or copies non-VSAM data sets (DFSMSdss must be used for VSAM data sets). While IEHMOVE is similar in function to IEBGENER and IEBCOPY, IEHMOVE can move or copy data sets without pre-allocating the space for the output data sets, unlike the other utilities.
- Examples of using IEHMOVE to copy non-VSAM data sets.
IEHPROGM is a utility that builds and manages system control data such as catalogs, and can rename and delete data sets. IDCAMS has overtaken this as the cataloging method of choice, since IEHPROGM requires RACF authorization.
IFHSTATR formats and prints tape volume error information from type 21 SMF records.
- An example of printing SMF records with IFHSTATR.
SPZAP (Super Zap)
SPZAP (also known by its program name, AMASPZAP) can list, map and modify load modules (executable programs) or patch volume tables of contents (VTOCs). It may also zap or list physical records on DASD. This utility is primarily used by technical support personnel to perform system maintenance and fix broken DASD structures.
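As a sketch, a zap step that patches two bytes in a load module CSECT might look like this. The module and CSECT names, offsets and hex patterns are all placeholders; the VER statement must match the current contents before the REP replacement is applied, which is the utility's built-in safeguard:

```jcl
//ZAPSTEP  JOB (ACCT),'SPZAP',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=AMASPZAP
//SYSPRINT DD SYSOUT=*
//SYSLIB   DD DSN=MY.LOADLIB,DISP=SHR
//SYSIN    DD *
 NAME MYPROG CSECT01
 VER 0104 47F0
 REP 0104 4700
/*
```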
Part II: Mainframe terminology glossary
By Wayne Kernochan, Contributor
The following is a glossary, listed in alphabetical order, providing my own take on classic mainframe technologies, their virtues and their pitfalls, in order to provide some general background on mainframes and expand upon some of the terminology/utilities listed in Part I.
- 3270 -- IBM's "dumb" terminal product line. Because this terminal has no processing power, all the contents of a screen are sent to the mainframe at once when the user presses a certain key (e.g., ENTER); and when the mainframe responds, it sends a command to alter (or keep the same) the entire screen. As a result, all interactions via the 3270 can be thought of as "form entry": typing input in response to prompts on the 3270 screen, getting output that is displayed in form templates on that screen. This arrangement allows the mainframe to avoid the performance drain on processors from constantly attending to user inputs. However, it means that applications must "guess" what the user typed in between ENTERs, and this limits the responsiveness of the application to the user. Therefore, Digital with VMS and the Unix folks created operating systems that accepted and processed each character typed as it arrived, an idea that ultimately dominated the market when PCs performed much of the processing without involving a server. Nevertheless, because mainframe operating systems were originally designed to assume 3270s (the so-called "master-slave" architecture), 3270s and 3270-style applications and utilities have remained the norm in many mainframe shops, and moving these shops to GUIs (graphical user interfaces) and PCs or network computing has been a painfully slow process.
- Batch -- In the early days of mainframe use, users discovered that transactions like journal entries could be deferred until the end of the day and then applied to a data store all at once, in a "batch." For example, changes to a sequentially stored set of records could be specified as insert/delete/update, and batch jobs could scan over the records in sequential order and carry out the changes, without interruption by other processes. This meant extremely fast processing of the task. As a result, every weeknight and weekend became a "batch window," in which systems were taken offline (i.e., communications with users was severed in order to avoid batch-process interruption) to carry out batch jobs, including accounting, order entry and backup, as well as non-batch fixes to problems. Over the years, and with the advent of 24/7 user interaction due to the Web, many batch processes have been superseded by ones that run online. However, many have not, and as the size of data stores has increased, it has required strenuous efforts by administrators and IBM to keep the batch window from expanding until systems are always offline and no users can employ them at all. Although strictly speaking, backup is not necessarily a batch task, the biggest offender is the backup window. Note that batch jobs can be (and, these days, often are) run online, but require careful redesign so that other processes do not spoil the data they are working on.
- CICS -- This is a somewhat odd IBM product, in that it has not one but two distinct functions. First, it is a TP monitor -- that is, it multiplexes, load balances and routes transactions from multiple sources to back-end databases and applications. Oracle/BEA Tuxedo is the primary non-IBM example of a TP monitor. In newer architectures, this is often handled by the database engines themselves for data-type transactions, or by Web and application servers for other types of transactions. Second, CICS is an online process scheduler and development environment. In other words, in a typical mainframe data center, developers will use CICS to test and run their programs, sometimes together with operational processes also scheduled by CICS. This means that CICS' decisions about process priority can be highly important both to users and to developers.
- DB2 -- IBM's relational database, introduced in the mid-1980s. It is a typical enterprise relational database. The key points to remember in the context of the mainframe are that DB2 dominates the mainframe database market, and that the mainframe version of DB2, despite ongoing efforts by IBM, is still significantly different than the Unix/Linux and Windows versions.
- DOS, or DOS/360 -- One of three original mainframe operating systems, along with VM/CMS and MVS. It should not be confused with Microsoft's original PC operating system, which was also called DOS (for Disk Operating System). To distinguish the two, IBM's DOS was often called DOS/360 (after the mainframe's 360 line) and Microsoft's DOS was called MS-DOS. Note that despite the similar name, MS-DOS is unrelated to IBM's DOS; it derives largely from the earlier CP/M operating system. DOS is a much simpler operating system than MVS, intended for smaller machines. It has one notable peculiarity: the size of all data sets is specified as a fixed number in the JCL. This means that whenever new data causes a data set to expand beyond that number, the entire system must be taken down and the JCL for each affected job must be redefined. Nevertheless, IBM continues to maintain and upgrade DOS, and it remains highly popular among IBM mainframe customers with slow-changing data sets due to its ease of administration. At present, it appears to have evolved into z/VSE.
- IMS -- IBM's main database before DB2, and still highly valued by its customers. The key characteristic of IMS is that record schemas are arranged in a "tree" structure. For example, if a particular structure has a "teacher" record "root" with several "student" record "branches," the same structure cannot have a student record root with several teacher record branches -- that must be done with a separate structure. As a result, data stores with this type of many-to-many relationship are much larger with IMS, and performance in typical database tasks is slower. However, for run-the-business applications that do not typically involve many-to-many relationships, IMS is superior, and therefore many business-critical mainframe applications from the 1960s and 1970s continue to use IMS. IBM has periodically updated IMS to handle new environments such as the Web.
- ISAM -- Indexed Sequential Access Method. This was IBM's original data-management "method." IBM defined data as sets of fields grouped into records, each field having a value. ISAM indexes include an index entry for each such record, specifying the start address on disk of that record, and those index entries are sequential -- that is, the records are stored in order on the disk, and index entries are arranged in the same order. The initial index entries are grouped as nodes, with a maximum capacity. Thus, when a node reaches capacity due to added records, a pointer is inserted in the record itself to the next record in a "chain." As a result, over time, getting to a record requires more and more disk accesses and thus can slow application performance drastically. ISAM is still used in data centers in legacy applications from the 1960s, and a variant of ISAM (the MyISAM storage engine) long served as the default storage engine of the open source relational database MySQL, now owned by Oracle.
- JCL -- Job Control Language. A set of specifications at the beginning of a job, or process, that indicate the resources used in the job. Today, new mainframe applications often hide JCL (that is, it is automatically specified by a compiler or by translation from source/macro code) because of its complexity. A line of JCL typically begins with '//'.
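To make the glossary entry concrete, here is a minimal sketch of a complete job: a JOB statement naming the job, an EXEC statement naming the program to run, and DD statements describing the data sets the step uses. The account code, data set name and classes are placeholders:

```jcl
//MYJOB    JOB (ACCT),'EXAMPLE',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY
```

Every statement begins with //, which is the surest way to recognize JCL on sight.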
- MVS -- One of three original mainframe operating systems, along with VM/CMS and DOS. It is typically thought of as older than VM/CMS and therefore has more legacy applications, but it is not thought to be as well optimized for newer mainframe needs as VM/CMS. At present, it appears to have evolved into z/OS.
- TCAM -- Telecommunications Access Method. Despite its name, this has nothing to do with data management. TCAM is similar to a networking protocol: It typically operates on the mainframe end of a conversation between the IBM mainframe and 3270 "dumb" terminals, handling communications between the two. More sophisticated data centers tend to use its successor, VTAM. Communication is "chunky" -- see 3270.
- VM/CMS -- One of three original mainframe operating systems, along with MVS and DOS. Originally, it was CMS, paired with the CP control program, before IBM's development of virtual machines on the System/360 Model 67 in the late 1960s. Addition of virtual machine support made it more modern than MVS. At present, it appears to have evolved into z/VM.
- VSAM -- Virtual Storage Access Method. IBM introduced this to solve the performance-slowdown problems of ISAM. VSAM does this by splitting nodes whenever they reach capacity. Note that this splitting involves placing half of the original node's index entries in new-node 1 and the other half in new-node 2, so that neither new node is near capacity. This is important because it ensures that applications that are constantly adding a record and then deleting it do not force the system to constantly split and "de-split." Although VSAM contains none of the bells and whistles of a database management system (now known as a database), it remains popular as a way to get the absolute maximum in performance out of a data-using application. VSAM-based applications may still be more numerous than those of any other mainframe data management product.
- VTAM -- Virtual Telecommunications Access Method. VTAM is an upgraded version of TCAM. It has the capability of handling guest access of anonymous users from outside a site. This idea can still be seen in Unix/Linux and Windows networking. As in the case of TCAM, the communication it handles is "chunky" -- see 3270. VTAM has been folded into the SNA communications stack, IBM's proprietary alternative to standards-based stacks such as TCP/IP.
- VTOC -- Volume Table of Contents. A data structure on a disk (DASD) volume that describes the data sets stored on that volume and where their extents are located. (Tape volumes use tape labels, written by utilities such as IEHINITT, for the analogous purpose.)
- z/OS -- The mainframe operating system typically used on IBM's current mainframes, such as the System z9 and z10.
What did you think of this feature? Write to SearchDataCenter.com's editors about your data center concerns at [email protected].