Seven things you should know about mainframe systems

Vendors are continuing to invest in mainframe platforms, but should you do the same? Find out why mainframe systems still deserve some serious consideration.

Plenty of myths about the mainframe swirl around the IT industry, and mainframes are also under pressure from x86 and Windows systems. If you've already made the investment and are wondering which road to follow, there are some important factors to consider before you make a final choice. This tip covers seven key things you should know about using mainframes in the enterprise.

1. Mainframe systems are still being purchased
Even though the recession put a dent in everyone’s IT budget, the mainframe appears to be coming back with the economy. There are still thousands of mainframe users in the world and, according to a BMC survey on the relevancy of the mainframe in today’s data centers, most managers using them plan to buy more in the coming year.

Another good sign is IBM’s continued investment in the platform. Last year, IBM introduced zEnterprise to the world, which included blades from other platforms, as well as a healthy boost in processor speed. IBM is also constantly updating software, packing more features into the mainframe operating systems (OSes) and major subsystems.

2. Mainframe systems are virtualized
IBM introduced its first major OS to support virtual memory, OS/VS1, in 1972. The flagship virtual machine (VM) OS, VM/370, came out the same year and remains a customer favorite. Its continued popularity is due, in no small part, to its efficient OS hosting, along with the flexibility it gives shops to automate and configure their systems.

In all mainframe operating systems, virtualization remains so deeply ingrained that no one can imagine running a data center without it. Virtualization will remain a key mainframe technology as long as IBM continues to pack more processors and memory into smaller boxes.

3. The mainframe is a cloud computer
The “computing in a cloud” concept posits using computational resources and software as services without worrying about where they come from. Most mainframers will recognize this as the very model upon which mainframe systems were built.

The cloud also includes the idea of provisioning one-off personalized computing instances that can be returned and reused later. IBM mainframes provide this type of functionality through z/VM and the IMS Batch Terminal Simulator.

4. Mainframes are efficient computing platforms
Big iron’s roots go back to the days when every cycle was precious and programs had to fit into 64 KB of central storage. As a result, the mainframe was built from the ground up to efficiently use memory, storage and peripherals. For example, the z/OS dispatcher takes advantage of every interrupt or workload pause to quickly find the next unit of work waiting for a processor. In the 1990s, IBM introduced the game-changing Workload Manager (WLM). Previously, systems programmers had to explicitly assign computing resources to workloads. With WLM, technicians set performance goals (e.g., 90% of transactions under two seconds) and let the system manage the resources to achieve those goals.
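The goal-based idea behind WLM can be sketched in miniature. This is purely an illustration under simplified assumptions; the class name GoalBasedManager and its adjustment factors are hypothetical, and real WLM algorithms are far more sophisticated:

```java
// Illustrative sketch only: models a goal of "90% of transactions under
// two seconds" by nudging a hypothetical resource share up when the goal
// is missed and down when it is comfortably exceeded.
public class GoalBasedManager {
    private double share;              // fraction of CPU granted (0..1)
    private final double goalSeconds;
    private final double goalPercentile;

    public GoalBasedManager(double initialShare, double goalSeconds, double goalPercentile) {
        this.share = initialShare;
        this.goalSeconds = goalSeconds;
        this.goalPercentile = goalPercentile;
    }

    /** Percentage of sampled transactions that met the response-time goal. */
    static double percentMeetingGoal(double[] responseTimes, double goalSeconds) {
        int met = 0;
        for (double t : responseTimes) if (t <= goalSeconds) met++;
        return 100.0 * met / responseTimes.length;
    }

    /** One management interval: compare achievement against the goal and adjust. */
    public double adjust(double[] responseTimes) {
        double achieved = percentMeetingGoal(responseTimes, goalSeconds);
        if (achieved < goalPercentile) {
            share = Math.min(1.0, share * 1.10);   // missing goal: grant more resources
        } else if (achieved > goalPercentile + 5) {
            share = Math.max(0.05, share * 0.95);  // beating goal: donate resources back
        }
        return share;
    }

    public static void main(String[] args) {
        GoalBasedManager mgr = new GoalBasedManager(0.20, 2.0, 90.0);
        double[] slowInterval = {0.5, 1.2, 2.8, 3.1, 2.5};  // only 40% under 2s
        double[] fastInterval = {0.3, 0.4, 0.6, 0.8, 1.1};  // 100% under 2s
        System.out.printf("after slow interval: share=%.3f%n", mgr.adjust(slowInterval));
        System.out.printf("after fast interval: share=%.3f%n", mgr.adjust(fastInterval));
    }
}
```

The key design point the sketch captures is the inversion WLM introduced: administrators state an outcome, and the system owns the resource arithmetic.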

As a result, mainframe systems run comfortably at over 85% utilization of computing resources. The asynchronous nature of the I/O subsystem also lets the mainframe drive devices at high speed and throughput.

5. Mainframe systems are up to date
Many vendors and pundits would have you believe the mainframe is “old technology” that is sorely in need of modernization. They are wrong.

Mainframe processors continue to be very competitive, with high clock speeds and fast caches. The CPUs themselves have many sophisticated optimization features, such as pipelining and out-of-order instruction execution, that are on par with many modern x86 processors.

Mainframe software is also keeping up with technology advancements. IBM’s transaction processors support service-oriented architecture (SOA), Atom feeds and PHP scripts. There’s also support for “modern” languages, such as Java and C++. If you are interested in the latest computing fad, you will likely find support for it on the mainframe.

6. Mainframe systems are secure
Almost all of the mainframe’s functions and resources are protected by robust security capabilities. Not only does the mainframe’s centralized structure make security administration easier, but IBM’s Resource Access Control Facility (RACF) includes features that keep two or more rules databases in sync.

Software such as RACF, Top Secret and ACF2 ensures no one can get onto a mainframe without a valid logon ID and password. In addition, the structure of the system, and its mechanisms for controlling authorized programs, guard against attacks that would work on distributed platforms, such as buffer and stack overruns.

The mainframe offers a full suite of cryptography functions, such as encryption, decryption, hashing and masking, that are implemented in hardware to make them faster and to keep the computationally heavy workload off the regular processors. This means that anyone wanting to steal cryptography keys must have physical access to the box. Accessing the box is difficult enough; even so, IBM also includes self-destruct mechanisms in case someone tampers with the security-related hardware.
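The application-level view of that hardware offload can be illustrated in standard Java, a language the article notes the mainframe supports. The class and method names below are hypothetical, and the exact z/OS provider configuration is an assumption about your install; the point is that the same JCE call works whether a software provider or a hardware-backed provider services it, so application code does not change when the crypto moves into hardware:

```java
import java.security.MessageDigest;

public class HashDemo {
    // Hash bytes with SHA-256 via the standard Java Cryptography API.
    // On z/OS, a hardware-backed security provider (configuration varies
    // by installation) can satisfy this same call from the crypto
    // hardware; elsewhere it runs in software, with identical results.
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256Hex("mainframe".getBytes("UTF-8")));
    }
}
```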

7. Mainframes are a hardware geek's dream
Mainframe systems, for better or worse, are one of the last bastions of “bare-metal” programming for the technically inclined. Systems programmers routinely view dumps and traces as a window into the system’s structure. Through a dump, a curious soul can chase control blocks, consult documentation and ponder the meaning of the various flags, addresses and counters contained therein. Dumps and traces are the fundamental methods of debugging on the mainframe, and IT staff therefore regard them as invaluable capabilities.

The mainframe is also one of the last places one may get a chance to see assembler language in action. Assembler is another view into the heart of the machine, and it presents a challenge when trying to shave a few extra microseconds off an important routine. In some cases, assembler is the only language a programmer can use to get system-level work done.

Some would argue that these technical skills are outmoded in the days of graphical user interfaces and SOA. True, this deep knowledge of the technology isn’t needed 90% of the time, but when things really go off the rails, it is invaluable.

ABOUT THE AUTHOR: Robert Crawford has been a systems programmer for 29 years. While specializing in CICS technical support, he has also worked with VSAM, DB2, IMS and assorted other mainframe products. He has programmed in Assembler, Rexx, C, C++, PL/1 and COBOL. The latest phase in his career finds him an operations architect responsible for establishing mainframe strategy and direction for a large insurance company. He lives and works with his family in south Texas.