Q

What is the result of a CPU bottleneck on the I/O process?

What's the impact on I/O when you run in a CPU-constrained mainframe environment?

The I/O process affects workloads in a CPU-constrained mainframe unequally.

For batch processes in a CPU-limited system, input/output (I/O) makes a bad situation worse. Generally the lowest priority in the system, a batch workload barely runs when there's a CPU bottleneck because it competes with online transactions. With every I/O, a job gives up the CPU, and higher-priority work takes it over. Even if the I/O completes quickly, the batch job must climb the dispatcher chain for another chance at the processor, which delays the workload's completion.

Online transactions typically get the CPU time they need. The I/O process itself won't slow down a Customer Information Control System (CICS) or Information Management System (IMS) workload, because each performs asynchronous I/O.

However, the effect of CPU bottlenecks on individual transactions may be the same as for batch. With CICS, once a transaction's I/O completes, the transaction must climb CICS's dispatcher chain before it proceeds. The delay is especially noticeable in CICS workloads with deep dispatch queues.

IMS transactions running in message processing regions (MPRs) take a different path, but the result is the same. Once the database I/O completes, IMS posts the MPR, which must then compete with every other address space on the mainframe before it can do more work.

The fastest I/O is the one you don't do. Therefore, to prevent an I/O bottleneck from compounding a CPU-limited environment, take advantage of in-memory data wherever possible. Options include buffer pools, in-memory reference tables, caching and data spaces.
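As an illustration of cutting physical I/O through blocking and buffering, a batch job's DD statements might look like the sketch below. The dataset, program and step names are hypothetical, and the right block size and buffer counts depend on the device and workload; this is only one plausible tuning, not a recommendation for any specific job.

```
//* Hypothetical batch step. A large block size (half-track
//* blocking on 3390 for 80-byte records) means fewer, bigger
//* physical I/Os, and extra BUFNO buffers let QSAM read ahead,
//* so the job surrenders the CPU less often.
//STEP1    EXEC PGM=MYPGM
//INFILE   DD DSN=PROD.DAILY.INPUT,DISP=SHR,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,BUFNO=30)
//* For a VSAM file, data and index buffers come from the
//* AMP parameter instead of DCB.
//VSAMIN   DD DSN=PROD.CUST.KSDS,DISP=SHR,
//            AMP=('BUFND=20,BUFNI=10')
```

The trade-off is virtual storage: every extra buffer stays resident for the life of the step, so buffer counts this high only pay off for datasets the job reads or writes heavily.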

About the author:
Robert Crawford spent 29 years as a systems programmer, covering CICS technical support, Virtual Storage Access Method, IBM DB2, IBM IMS and other mainframe products. He programmed in Assembler, Rexx, C, C++, PL/1 and COBOL. Crawford is currently an operations architect based in South Texas, establishing mainframe strategy for a large insurance company.

Next Steps

Where to start with in-memory data

In-memory databases explained

A case study of in-memory technology

This was last published in February 2015



Join the conversation

3 comments


For the I/O done by batch jobs, would appropriate use of JCL DD parameters (a) reduce the quantity of I/Os by combining multiple records into a larger block and (b) reduce waiting for I/O completion by reading upcoming blocks before they are asked for and writing filled blocks asynchronously? I'm referring to parameters such as ACCBIAS, BUFND, BUFNI, BUFSP, RECFM={FB|VB}, SMBDFR=Y, SMBVSP, BLKSIZE, BFTEK=A, BUFNO, and BUFL
I find it's better to let batch processes run on a system that has occasional spikes of high priority processing, regardless of its CPU situation.
Runs with more virtual CPUs cluster members on the same or different system z servers
