DB2 System Analysis Case Study (cont.)

In this post, I’ll delve a little deeper into the system architecture and CPU reduction opportunities found in a major DB2 system at a large financial institution.


In order to do a complete performance analysis of the system, the statistics were reviewed using the standard DB2 performance reports. This data provided the basis for the various system, database, application, and SQL observations and improvement recommendations. These statistics, along with system, process, and application documentation, interviews with application programmers, and observations of the workload, guided the investigation of CPU consumption and the CPU reduction effort.

Current Enterprise Architecture

The enterprise architecture had evolved over the years to support many diverse database systems. This caused several databases to be cloned and their transaction workloads to be intermixed. This combination of CICS transactions produced a diverse workload with different data requirements, durations, and applications.

This combination of workloads runs in a single-CPU mainframe environment that supports both the test and production environments. Workloads come into the system through a variety of interfaces: CICS, Visual Basic, and Web applications using MQ Series Connect, plus batch jobs throughout the day. These applications access a variety of database tables that support the corporation's nation-wide business needs.

The enterprise applications environment, with its mix of applications, operates efficiently but experiences occasional dramatic spikes in application CPU requirements. These spikes manifest themselves throughout the day when CICS program errors occur and dumps are created. These dumps cause the system to pause and record the transaction state. This occurs too frequently: almost once every 15 minutes in the production CICS region. Busy business periods with multiple concurrent transactions and a large memory footprint also show stress within the systems.

Work Load Manager

The architecture of the system and its performance are controlled through a variety of software, with Work Load Manager (WLM) playing a central role in overall system performance. WLM controls CPU allocation and sets the priorities of the different subsystems, online workloads, and batch processes.

Analysis of the WLM settings was needed to determine the optimum and most efficient workload settings, and to verify that the DB2, CICS, and batch transactions have compatible settings that maximize throughput.
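One sanity check from that analysis can be sketched as code. This is a minimal, hypothetical illustration, not a WLM API: the names and the importance values are assumptions, using WLM's convention that a lower importance number means a more important service class, and it encodes the guideline (discussed in the recommendations below) that the database subsystem should sit above the applications it serves.

```python
# Hypothetical sketch: verify DB2 is prioritized above the workloads it
# serves. In WLM terms, a lower importance number = more important.
wlm_importance = {
    "DB2_SUBSYS": 1,   # assumed service-class names and values,
    "CICS_ONLINE": 2,  # for illustration only
    "BATCH": 4,
}

def db2_priority_ok(settings: dict) -> bool:
    """True if DB2 is strictly more important than CICS and batch work."""
    return settings["DB2_SUBSYS"] < min(
        settings["CICS_ONLINE"], settings["BATCH"]
    )

print(db2_priority_ok(wlm_importance))  # True for the settings above
```

A real review would read the active WLM policy definitions rather than a hand-built dictionary, but the ordering check is the same.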

Observation of system processing showed that the workflow accomplished fluctuates when errors or dumps occur in the various CICS regions. Plotting these dumps against the system workflow showed that system CPU peaked and workflow was severely impacted.

When an on-line program error or dump occurs, its core dump documentation and resolution become the highest priority within the system, stopping or pausing all other work. An example of the problem occurred by 10:30 a.m. on a summer day: five regions had 27 errors/dumps by that time, roughly one every five to six minutes (27 dumps in 150 minutes) during the production work day. Industry standards typically allow only a very small number of these errors or dumps in production regions. This problem relates directly to application quality assurance testing, and the situation will only continue to degrade the overall workflow and overall performance of the systems.
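The dump-rate arithmetic above is simple enough to check directly. A minimal sketch, assuming the 150-minute window runs from an 8:00 a.m. start of the production day to the 10:30 a.m. observation:

```python
# Dump-rate arithmetic for the example above.
# Assumption: production started at 8:00 a.m., so 10:30 a.m. is a
# 150-minute window.
dumps = 27
window_minutes = 150

minutes_per_dump = window_minutes / dumps
dumps_per_hour = dumps / (window_minutes / 60)

print(f"one dump every {minutes_per_dump:.1f} minutes")  # ~5.6 minutes
print(f"{dumps_per_hour:.1f} dumps per hour")            # ~10.8 per hour
```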

CICS Region Configuration and Allocations

The architecture of the CICS systems and the on-line programs reflects how additional data and capabilities have been added over time. New CICS regions and databases joined the workload as additional systems were brought in and additional features were added to the applications.

These workloads were each separated into their own regions. To improve the overall workflow and provide further room to grow the CICS transaction workload efficiently, a Sysplex architecture could be considered. The CICS Sysplex architecture separates the workload into terminal owning regions (TOR), application owning regions (AOR), and data owning regions (DOR) that can be finely tuned to service each specific type of workload. These regions work together to spread and balance the peak transaction workloads.
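The workload-spreading idea can be illustrated with a toy routing model. This is a deliberately simplified sketch with hypothetical region names; in a real CICSplex, routing is handled by CICSPlex SM and WLM, not by application code. Here a TOR dispatches each incoming transaction to whichever AOR currently has the fewest in-flight transactions:

```python
# Toy illustration of TOR -> AOR workload balancing in a CICSplex.
# Region names are hypothetical; real routing is done by CICSPlex SM.
from collections import Counter

aors = ["AOR1", "AOR2", "AOR3"]
in_flight = Counter({a: 0 for a in aors})

def route(txn_id: str) -> str:
    """Send the transaction to the AOR with the fewest in-flight txns."""
    target = min(aors, key=lambda a: in_flight[a])
    in_flight[target] += 1
    return target

assignments = [route(f"TXN{i:04d}") for i in range(9)]
print(assignments)  # nine transactions spread evenly, three per AOR
```

The point of the sketch is the shape of the architecture: because no single region owns the whole transaction, peak load can be spread across regions that are each tuned for their specific role.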


All of these architecture, system, database, application, and SQL considerations provide opportunities for CPU cost reduction. These cost reductions could be achieved through system tuning, database design analysis, application SQL documentation, and application coding standards and reviews. Implementing these has the potential to save tremendous CPU capacity and delay a CPU upgrade.

  • Analyze the number of abends, deadlocks, and dumps within different parts of your applications. These deadlocks and dumps consume a tremendous amount of CPU resources at critical times within your system.
  • Make sure that your Work Load Manager (WLM) is set up properly to distribute CPU resources adequately to the various database systems and applications. Having the database subsystem prioritized at the same level as, or below, the applications it serves can cause performance and overall throughput problems.
  • Validate the settings across your CICS transaction configurations. Make sure the maximum workload from one TOR, AOR, or DOR does not overwhelm another CICS processing partner.
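The first recommendation above, counting abends and dumps per region, can be sketched as a small log scan. The log format and region names here are invented for illustration; a real analysis would pull from SMF records or CICS statistics rather than a flat text log:

```python
# Count abends per CICS region from a simple text log.
# Hypothetical log format: "<timestamp> <region> ABEND <code>".
from collections import Counter

log_lines = [
    "08:05:12 CICSPRD1 ABEND ASRA",
    "08:17:44 CICSPRD2 ABEND AICA",
    "08:31:03 CICSPRD1 ABEND ASRA",
    "09:02:55 CICSPRD3 ABEND ASRA",
]

abends_by_region = Counter(
    line.split()[1] for line in log_lines if " ABEND " in line
)
for region, count in abends_by_region.most_common():
    print(region, count)  # worst-offending region first
```

Even this crude count answers the key question from the case study: which regions are dumping most often, and therefore where QA effort will buy back the most CPU.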


Dave Beulke is an internationally recognized DB2 consultant, DB2 trainer and education instructor.  Dave helps his clients improve their strategic direction, dramatically improve DB2 performance and reduce their CPU demand saving millions in their systems, databases and application areas within their mainframe, UNIX and Windows environments.

