This column originally appeared on TechTarget's Expert Answer Center as a post in Robert Crawford's blog. Robert served as the on-demand expert on the Expert Answer Center for two weeks, from January to February 2006, during which he was available
to quickly answer questions on CICS application performance and design, as well as to write daily blog entries. Keep an eye on the Expert Answer Center for topics that could help your IT shop.
When IBM first announced Parallel Sysplex and Workload Manager (WLM), we figured we would be able to get rid of our performance team within a few years. WLM would take care of workload distribution, and Parallel Sysplex would provide the redundancy needed for availability. All we would have to do was occasionally buy more IBM hardware and make the salesman happy.
What we didn't realize was that Parallel Sysplex came with one big assumption: that all the LPARs would be configured the same. I don't know about your shop, but with ever-increasing software costs we find ourselves moving heaven and earth to get the expensive software onto the smallest possible processor. We also don't cotton to the idea that each LPAR should have to be configured to handle the entire production workload at once.
A good case in point is our very CPU-intensive CICS application. For performance and availability reasons we run it in 20 application-owning regions (AORs) spread across four machines. The application's input comes from CICS Transaction Gateway (CTG) clients in WebSphere. Between WebSphere and our CTG servers, we use Sysplex Distributor (SD). Sysplex Distributor queries Workload Manager so it can route messages to the machine doing the least work. The CTG servers connect to Listening Regions (LRs), whose job is to route work to the AORs. The problem is that SD, on the advice of WLM, makes very dramatic workload swings. I've watched the transaction rate in an LR go from 60 tasks per second to almost nothing because someone submitted a CPU-bound batch job.
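To see why that swing happens, here is a minimal sketch of the "route to the least-busy LPAR" idea. This is illustrative Python, not anything IBM ships; the LPAR names, utilization numbers, and the `route_least_busy` function are all hypothetical, standing in for the advice SD gets from WLM.

```python
def route_least_busy(lpars):
    """Pick the LPAR reporting the lowest CPU utilization --
    roughly the advice Sysplex Distributor acts on."""
    return min(lpars, key=lambda name: lpars[name])

# Normal state: utilization is fairly even, routing spreads out.
lpars = {"SYSA": 40, "SYSB": 45, "SYSC": 42, "SYSD": 44}
print(route_least_busy(lpars))  # SYSA

# A CPU-bound batch job spikes three LPARs. Every new connection now
# piles onto the one LPAR that still looks idle, even though
# high-priority CICS work on the "busy" LPARs would still get CPU.
lpars = {"SYSA": 40, "SYSB": 95, "SYSC": 93, "SYSD": 96}
print(route_least_busy(lpars))  # SYSA again -- the whole workload swings
```

The utilization snapshot changes abruptly when batch work lands, so a router that always chases the lowest number produces exactly the cliff-edge swings described above.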
Now, if we went with IBM's plan for Parallel Sysplex, we would have to run 20 AORs on each machine, for a total of 80. Instead of going with that unwieldy and inefficient configuration, we cheated. We put three LRs on each of several machines and wrote transaction-routing exits to round-robin the workload across the 20 AORs. We control an LPAR's workload by moving the AORs around: if one system gets an upgrade so it can handle more work, we just move more AORs to it.
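The dispatch logic behind those routing exits is simple enough to sketch. The real exits would live in a CICS dynamic routing program, not Python; this is just the round-robin idea, and the names (`RoundRobinRouter`, `next_aor`, the `AORnn` labels) are hypothetical.

```python
import itertools

class RoundRobinRouter:
    """Cycle through the available AORs in order, ignoring reported
    load entirely -- predictable, even distribution by construction."""
    def __init__(self, aors):
        self._cycle = itertools.cycle(aors)

    def next_aor(self):
        return next(self._cycle)

# 20 AORs, as in the configuration described above.
router = RoundRobinRouter([f"AOR{i:02d}" for i in range(1, 21)])
first_three = [router.next_aor() for _ in range(3)]
print(first_three)  # ['AOR01', 'AOR02', 'AOR03']
```

Because the rotation never consults utilization figures, no single batch job can yank the whole workload onto one LPAR; shifting capacity is done by changing which AORs are in the list, not by the router second-guessing WLM.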
Granted, IBM has done a lot to improve WLM and its associates in the past few years, but there's more room for improvement. Primarily, software that uses WLM information needs to recognize that CICS and other high-priority work will get CPU even when the processor appears to be very busy. It would also be nice if we could exercise more control over the "workload balancing." Sometimes a stupid round-robin algorithm is the best one to use, and we would rather be able to predict where the work is going. Affordable software would be nice, too. If we didn't have to worry so much about some expensive batch reporting system used by three users, we could finally get to the point where all our machines are configured the same.
This was first published in February 2006