
Punch up database performance by reorganizing data

A well-timed mainframe database reorg can deliver much improved response times.

Performance management has taught us that a system running at a crawl is almost as bad as a day-long system crash.

Turning on or tuning database reorganization (reorg) on the mainframe takes surprisingly little time and often delivers a significant performance boost.

Over time, vendors have added many ways to run reorg without user intervention. That convenience makes reorg easy to overlook until database responsiveness declines to the point where end users are dissatisfied and administrators wonder why the new database versions don’t respond as they should. This tip shows how paying attention to reorg can save you some headaches.

It’s all about database optimization
Database reorg fixes data store sub-optimization — a problem as inevitable as death and taxes. From the moment you turn on your database for the first time, the storage of the data drifts further and further from its performance-optimized state. How does this happen? Here are a few of the typical ways:

  • The database stores proportionally more of one data type, and proportionally less of another, than the proportions for which it was optimized;

  • We add new data types and apply the old (inappropriate) optimization techniques;
  • We use the database differently, especially when using new querying patterns that perform sub-optimally with the old data-store design;
  • Other data stores on the same disk compete for the same storage, leading to sub-optimal storage compromises for this database.

All these trends degrade application performance over time. In the case of an online transaction processing (OLTP) database with a steady stream of updates (like MODEL 204 in my day), a year was enough to create a 5% to 15% average dip in performance, and performance continued to degrade by a similar amount every following year. Anecdotal evidence suggests this was not far from users’ experience with other popular mainframe and non-mainframe databases. In fact, on platforms like Prime, a poor implementation of virtual storage access method (VSAM) meant that the office automation suite took 10 additional minutes to start for every additional 100 users on a system, because of user data fragmentation on disk – unless you ran reorg, and users typically didn’t.

Applying a mainframe database reorg fixes the data store sub-optimization problem – for now. Reorg does this essentially by looking at the database and optimizing storage for its current state. Reorg can change the indexing scheme so that indexes require fewer disk accesses and load faster from disk. Reorg can consolidate data on the disk so the read/write heads do not ping-pong from one end of the disk to the other, much as the Disk Defragmenter does in Windows. In addition, a reorg can change the size of the cache so the database is not constantly and unnecessarily swapping data between disk and main memory. But a reorg is only a point-in-time improvement: Time and utilization will eventually cause the database to lose optimization again.
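The consolidation effect is easy to picture with a toy model. The sketch below is illustrative only (the cylinder numbers are made up); it simply compares the head travel needed to scan the same blocks before and after they are laid out contiguously:

```python
# Illustrative only: estimate total head travel for a sequential scan,
# given the cylinder number of each block in the order it is read.

def total_seek_distance(cylinders):
    """Sum of cylinder-to-cylinder moves the heads make during the scan."""
    return sum(abs(b - a) for a, b in zip(cylinders, cylinders[1:]))

# A fragmented data store whose blocks are scattered across the volume...
fragmented = [10, 840, 35, 790, 60, 910, 85]
# ...versus the same blocks after a reorg has placed them contiguously.
reorganized = sorted(fragmented)

print(total_seek_distance(fragmented))    # large: heads ping-pong across the disk
print(total_seek_distance(reorganized))   # small: heads sweep once across the data
```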

Reorg has improved since I first encountered it 30 years ago as an administrator of a CCA MODEL 204 mainframe database. Today, most (if not all) major mainframe database vendors have implemented automated and online reorg. This means many of your mainframe databases, depending on the version and on whether you have to turn reorg on in the first place, can run an automated reorg on a roughly annual schedule, including reorg support for mainframe databases that need 24/7 availability.

However, there are some caveats with database reorg. First, while many modern database vendors provide automated reorg, there is no guarantee your database includes the feature. Even where automated reorg is available, there is no certainty the vendor-supplied reorg is optimal for your system. In other cases, the reorg you have is not set to run frequently enough, so you take a major performance hit. Or reorg may be set to run online too frequently, so the overhead costs you more performance than running it less often would. And reorg may never have been turned on at all if the vendor didn’t remind you.

Database reorg considerations
So it’s time to schedule a yearly review of reorg for your databases – including Information Management System (IMS) and VSAM ones. Here are some decisions to make:

First, decide whether you should schedule a reorg to run online or offline. An offline reorg can require quite a bit of downtime for the database and its applications, so it only makes sense if you have a periodic slot when the additional downtime won’t hurt. Likewise, running the reorg online slows down other processing. Moving to online reorg — where most reorg eventually goes — may not be a good idea if database performance is already maxed out serving important end users around the clock, or if unpredictable spikes in demand must still be handled.
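One way to keep that trade-off straight is to write it down as a crude decision rule. The sketch below is purely illustrative; the three inputs and the headroom threshold are assumptions, not vendor guidance:

```python
def choose_reorg_mode(has_quiet_window, headroom_pct, spikes_predictable):
    """Crude sketch of the online-vs.-offline decision described above."""
    if has_quiet_window:
        # A periodic slot exists where downtime won't hurt: offline reorg fits there.
        return "offline"
    if headroom_pct >= 20 and spikes_predictable:
        # Enough spare capacity (the 20% threshold is a placeholder) to absorb online overhead.
        return "online"
    # Maxed out with unpredictable spikes: neither mode is attractive yet.
    return "revisit capacity or scheduling first"

print(choose_reorg_mode(has_quiet_window=False, headroom_pct=35, spikes_predictable=True))
```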

Next, decide whether you need to run reorg more or less frequently. You should have some feel for the rate at which the database and data store change in a typical year: how much the mix of index types has changed, how many updates occur, and how many directory metadata definitions have been added or altered. Look at each of these factors, and then decide whether the overall trend is toward faster or slower change per gigabyte of data stored. If the database is changing faster, reorg more frequently; if slower, reorg less often.

A rule of thumb is that every 1% increase in the rate of change means a 1% decrease in the time between reorgs, and vice versa.
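Expressed as back-of-the-envelope arithmetic (the figures below are illustrative, not measurements):

```python
def adjust_reorg_interval(current_interval_days, change_rate_delta_pct):
    """Rule of thumb: a 1% rise in churn shortens the reorg interval by about 1%."""
    return current_interval_days * (1 - change_rate_delta_pct / 100.0)

# Churn per gigabyte grew 10% this year, so a yearly cycle tightens to ~328 days.
print(adjust_reorg_interval(365, 10))
# Churn fell 5%, so the cycle can stretch to ~383 days.
print(adjust_reorg_interval(365, -5))
```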

For online reorg, also decide whether it needs to be scheduled at a different time of day or a different time of year. Most mainframe database administrators see the overall distribution of mainframe workload shift over time during the day; if spikes are starting to occur at the time your reorg is scheduled, reschedule it for a period of lighter utilization. For example, if you’ve added a foreign subsidiary in England, there will be a new spike in workload about five hours earlier than the one in the United States. The same is true for time of year: other countries celebrate end-of-year holidays on a different schedule than the U.S., so as your organization goes global, the end-of-year rush tends to smooth out and lengthen. Usually, there are still plenty of nearby slots in the day or year where online reorg overhead hurts only slightly more than your present schedule did last year. Just be careful, in the process of rescheduling, not to create intervals between online reorgs that are too long or too short.

Finally, decide whether you need to change your database’s space allocation on disk. This is not really a reorg decision, but it handles a related problem that reorg cannot tackle: some of the performance hit on your database may occur because other data stores compete for the same space. You can determine this with a simple “map” of your disk cylinders — especially if you have a storage area network that supplies that information — showing how much your database’s data store is interspersed with other data, such as the extent to which large email files are creating large gaps on disk between IMS records. If this is happening, try to prevent those gaps by redefining the database’s physical disk allocation. Your power to manage how other applications create this data may be limited, but one “brute force” approach is to allocate one contiguous “span” of physical disk to the database, forcing all other data to be stored outside that span.
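As a rough sketch of what such a map might boil down to, assuming you can export a list of extents with their owning data sets (the extent boundaries and owner names below are entirely hypothetical):

```python
# Hypothetical extents on one volume: (start cylinder, end cylinder, owner).
extents = [
    (0, 199, "IMS_DB"),
    (200, 999, "EMAIL_ARCHIVE"),
    (1000, 1199, "IMS_DB"),
    (1200, 1399, "OTHER_APP"),
    (1400, 1599, "IMS_DB"),
]

def interspersion(extents, owner):
    """Fraction of the database's overall footprint that is occupied by other data."""
    own = [(s, e) for s, e, o in extents if o == owner]
    span = max(e for _, e in own) - min(s for s, _ in own) + 1   # first to last cylinder
    used = sum(e - s + 1 for s, e in own)                        # cylinders actually owned
    return 1 - used / span

print(f"{interspersion(extents, 'IMS_DB'):.1%} of the database's span holds other data")
```

A high interspersion figure is the signal that redefining the physical allocation, or reserving a contiguous span for the database, is worth investigating.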

ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.

This was last published in February 2012
