DB2 Case Study Introduction: DB2 Performance and Throughput

Every client is different, and working in a variety of environments has helped me understand what works and what doesn't across systems, applications and DB2 database design. Over the next few weeks, I'll highlight how to improve the DB2 performance of major systems, applications and databases, using my experience at a large financial institution as an example. By reading this case study, you will be able to pick up information that you can use in your own DB2 environment.

Let's start with an overview of the system and the most significant recommendations.

Background


The company had a goal of reducing CPU consumption across the overall system, its CICS regions and its DB2 workload. The production architecture comprised multiple application projects with diverse workloads coming from CICS, MQSeries systems and Java-based web applications, all referencing the mainframe DB2 system.

The system environment showed signs of stress from applications with very robust processing requirements. For example, some CICS transactions were performing SQL joins of many DB2 tables, while others were using large memory footprints. In addition, CICS application program errors and dumps increased CPU consumption and stress throughout the day.

Transaction-level statistics were minimal and needed to be developed to track each transaction's I/O and CPU consumption, and its growth as features were continually added to the systems. Many different system-level DB2 statistics reports were developed and used to tune the systems supporting the application transactions.

So make sure your systems, applications and databases have enough DB2 statistics to measure the overall number of each type of transaction, how much CPU it uses and how many I/Os it needs to process.
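On DB2 for z/OS, the per-transaction CPU and I/O figures come from the accounting trace and the system-level counters from the statistics trace. A minimal sketch of the console commands that turn these on, assuming SMF as the trace destination and the commonly used trace classes:

```sql
-- Hedged sketch: start the DB2 accounting trace, which records CPU time
-- and I/O counts for each transaction/thread (classes 1-3 are typical)
-START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)

-- Start the statistics trace for system-level counters such as buffer
-- pool activity and lock/latch contention
-START TRACE(STAT) CLASS(1,3,4) DEST(SMF)
```

The SMF records written by these traces are what reporting tools summarize into the transaction-level and system-level statistics reports described above.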

DB2 Database Design Issues


The current database configuration and object definitions also had many structures that hindered DB2 performance and overall throughput. The table and index definitions, with their multiple (20-plus) extents, added unnecessary overhead to overall processing. The tables were also overdue for database reorganizations to restore levels of service. More regular, scheduled reorganizations improve data and index clustering and overall CPU efficiency, and reduce elapsed time and locking overhead across the entire environment.
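A sketch of how such reorganizations might be identified and run — the database, table space and index names are placeholders, and the CLUSTERRATIOF threshold is just an illustrative cutoff:

```sql
-- Find indexes whose clustering has degraded (catalog statistics must be
-- current; 0.95 is an illustrative threshold, not a hard rule)
SELECT CREATOR, NAME, CLUSTERRATIOF
FROM SYSIBM.SYSINDEXES
WHERE CLUSTERRATIOF < 0.95;

-- Reorganize the affected objects; SHRLEVEL CHANGE permits concurrent
-- read/write access during most of the reorg
REORG TABLESPACE DBNAME.TSNAME SHRLEVEL CHANGE
REORG INDEX APPSCHEMA.IX_CUSTOMER SHRLEVEL CHANGE
```

Running RUNSTATS after the REORG keeps the catalog statistics in step with the reclustered data, so the optimizer sees the improvement.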

Additional database design analysis was needed to evaluate partitioning options for resolving the conflicts between high-performance batch and online processing. A number of tables with over one million rows were candidates for partitioning. Partitioning these tables would allow better data set extent management, spread I/O and locking across multiple partitions and enable parallelism, thereby reducing elapsed run times.
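As a hypothetical illustration of table-controlled range partitioning in the DB2 9 for z/OS style — the table, columns and date boundaries are all invented for the example:

```sql
-- Illustrative range-partitioned history table; each quarter lands in its
-- own partition, so I/O, locking and utilities can work per partition
CREATE TABLE APPSCHEMA.ACCOUNT_HIST
  (ACCT_ID    INTEGER       NOT NULL,
   POST_DATE  DATE          NOT NULL,
   AMOUNT     DECIMAL(11,2) NOT NULL)
  PARTITION BY RANGE (POST_DATE)
   (PARTITION 1 ENDING AT ('2008-03-31'),
    PARTITION 2 ENDING AT ('2008-06-30'),
    PARTITION 3 ENDING AT ('2008-09-30'),
    PARTITION 4 ENDING AT ('2008-12-31'));
```

With this layout, batch jobs working different date ranges touch different partitions, which is what eases the batch-versus-online contention described above.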

DB2 database design was also a major consideration within the system because of several factors, including the following:

  • The first major factor was the number of indexes defined on the DB2 tables. For example, the Customer table had 11 indexes. Industry best practice for an OLTP application is to have at most four or five indexes on a single DB2 table. This limit is recommended because indexes require storage space and must be updated to reflect every insert or delete operation on the base table. If an update operation changes a column that is in an index, the index must also be changed. The overhead of maintaining many DB2 index structures in a high-performance environment takes too much time and continues to grow as the database gets larger.

  • The next DB2 database design issue was the lack of partitioning for the large tables. Partitioning large tables splits up the data sets, avoids locking contention and promotes parallel processing within the environment.

  • Next, for tables with high read-to-write ratios, DB2 compression can save tremendous amounts of CPU and elapsed processing time. Compression helps reduce I/O because more rows fit on each page, so more data is cached in the buffer pools and retrieved with every I/O. The company had over a terabyte of tables that were candidates for DB2 compression evaluation.
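Two of the checks above can be sketched against the DB2 catalog — the schema names, the five-index threshold and the table space names are placeholders for illustration:

```sql
-- Spot potentially over-indexed OLTP tables by counting indexes per table
SELECT TBCREATOR, TBNAME, COUNT(*) AS INDEX_COUNT
FROM SYSIBM.SYSINDEXES
GROUP BY TBCREATOR, TBNAME
HAVING COUNT(*) > 5;

-- Turn on compression for a candidate table space; a subsequent REORG
-- builds the compression dictionary so existing rows get compressed
ALTER TABLESPACE DBNAME.TSNAME COMPRESS YES;
```

Index candidates flagged by the first query still need case-by-case review — an index that looks redundant may be carrying a critical access path.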

Application Programming Issues


Most of the application programs within the system did not use SQL joins. Even with the abundance of indexes, the programs were reading the data and joining it within the application. SQL joins, done properly, can reduce CPU and elapsed processing time tremendously; for example, they can take one program's run time from hours to minutes. Training would be needed so the application programmers learn proper SQL join techniques.
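A minimal sketch of the idea, with hypothetical table and column names: instead of the program fetching a customer row and then issuing a separate SELECT per order, one join statement lets DB2 do the matching.

```sql
-- One join replaces a per-row read loop in the application program;
-- the '?' host-variable marker would be supplied by the program
SELECT C.CUST_ID, C.CUST_NAME, O.ORDER_ID, O.ORDER_TOTAL
FROM   APPSCHEMA.CUSTOMER C
       INNER JOIN APPSCHEMA.CUST_ORDER O
           ON O.CUST_ID = C.CUST_ID
WHERE  C.CUST_ID = ?;
```

Pushing the join into DB2 saves the SQL call overhead of each individual fetch and lets the optimizer pick the join method and the supporting indexes.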

Application development management considering JDBC for Java processing should move toward the more efficient DB2 pureQuery techniques. DB2 pureQuery allows standard DB2 and Java programs to be compiled, with the SQL bound to the database for static DB2 Java processing. This technique avoids the overhead of object verification, security authorization and dynamic access path generation. DB2 pureQuery with static DB2 Java application coding is the best technique for Java web development. Again, education on the proper use of this technique is imperative.

Summary


All of these architecture, system, database, application and SQL considerations provide opportunities for CPU cost reductions. These reductions could be achieved through system tuning, database design analysis, application SQL documentation, and application coding standards and reviews. Implementing them has the potential to save tremendous CPU capacity and delay a CPU upgrade.

  • Make sure you have valid DB2 statistics for your systems, applications, and database tables and indexes, both when the system is implemented and as it grows.
  • Leverage DB2 partitioning for busy tables that play central parts in processing, to ease locking and encourage parallelism.
  • Also leverage DB2 partitioning for larger tables within your environment. Any table over 500,000 rows might benefit from partitioning and spreading out the I/O.
  • Implement DB2 compression for most database tables, except for high-volume insert, update and delete tables. In most cases compression will save CPU and I/O over the lifetime of the data.
  • Use the new DB2 pureQuery product to compile and statically bind your DB2 Java applications. Save CPU, improve security and eliminate dynamic access path problems.
