When you experience DB2 performance degradation, you’ll discover that no one has changed anything (of course), or at least no one admits to it. Fortunately, there are a wide variety of DB2 performance dependencies to check. When DB2 processes have been running consistently well and suddenly the DB2 performance reports show degradation, or users start to complain, there are many areas and components to evaluate and verify.
After you’ve checked that application developers have not put in any new code and the usual DB2 performance checklist comes up negative, here are three more DB2 performance areas to monitor as early warning signs of application performance issues.
- Verify buffer pools still have the same efficiencies. Buffer pools (BPs) cache the I/O of the DB2 tables and their indexes. Keeping a daily DB2 performance report of their usage and efficiency is critical for spotting DB2 performance tuning opportunities. BPs that have poor, or even negative, I/O cache efficiency, or that report no DB2 read or write engine available, require DB2 performance tuning as soon as possible.
These conditions can have a number of causes: table growth forcing object data sets into additional extents, table disorganization, or increased or different SQL activity driven by new or changed business activity.
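The cache efficiency your daily report tracks boils down to a hit ratio. Here is a minimal sketch, assuming you have already pulled getpage and physical-read counts from your daily statistics report (the figures and function name are illustrative, not actual report fields):

```python
def hit_ratio(getpages: int, pages_read: int) -> float:
    """Fraction of getpage requests satisfied from the buffer pool
    without a physical read. Can go negative when prefetch brings in
    more pages than were actually requested."""
    if getpages == 0:
        return 0.0
    return (getpages - pages_read) / getpages

# Illustrative daily figures for two buffer pools
print(hit_ratio(1_000_000, 50_000))   # 0.95 -- healthy
print(hit_ratio(200_000, 260_000))    # -0.3 -- negative efficiency, tune ASAP
```

Trending this number daily per buffer pool makes a sudden drop obvious long before users complain.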
To solve these BP issues, start by identifying the DB2 objects in the impacted BPs. One at a time, separate the table and index objects out of those BPs into dedicated or other appropriately sized BPs. Sometimes the best way to improve I/O cache efficiency is to expand the size of the BPs gradually until the performance problems disappear and your data is accessed more efficiently. I have written other articles on buffer pool tuning here and here.
- Verify dynamic SQL access paths haven’t changed. As table data grows and real-time statistics change, dynamic SQL access paths can change. Most of the time the access path, and DB2 performance along with it, improves. Unfortunately, sometimes it doesn’t.
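One lightweight way to catch an access path regression is to snapshot access paths periodically (for example, from EXPLAIN/PLAN_TABLE output) and diff the snapshots. A hedged sketch, assuming you have reduced each statement to a mapping of statement ID to a simple access description (the statement IDs and index names below are made up for illustration):

```python
def changed_access_paths(before: dict, after: dict) -> dict:
    """Return statements whose access path differs between two snapshots."""
    return {
        stmt: (before[stmt], after[stmt])
        for stmt in before.keys() & after.keys()
        if before[stmt] != after[stmt]
    }

# Illustrative snapshots: statement ID -> (access type, index used)
monday = {"Q1": ("I", "IX_CUST_ID"), "Q2": ("I", "IX_ORD_DATE")}
friday = {"Q1": ("I", "IX_CUST_ID"), "Q2": ("R", None)}  # Q2 flipped to a scan

print(changed_access_paths(monday, friday))
# {'Q2': (('I', 'IX_ORD_DATE'), ('R', None))}
```

Any statement that shows up in the diff is a candidate for statistics, index, or SQL review before it becomes a user complaint.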
Review your daily DB2 performance reports to understand the average, maximum, and minimum DB2 performance times of your applications via connection IDs and correlation IDs. Identify the top 10 DB2 performance hogs in your environment and work through their performance, their SQL, and the DB2 tables they reference for improvement opportunities. By diving into the details of one of those top 10 hogs each week, you can dramatically improve the performance of your overall DB2 environment.
- Make sure DB2 suspend time hasn’t increased. As more DB2 applications are added and more applications come back to the mainframe, the CPU and I/O workload requirements of the mainframe continue to increase. Since mainframe capacity is limited, your DB2 performance may suffer.
One additional way to monitor mainframe resources is through the amount of time your DB2 applications spend suspended. During suspend time the DB2 application is waiting for resources and doing nothing else. Tracking this dead time is critical because it is inefficient, can exacerbate deadlock situations, and extends application elapsed time. Regularly evaluate the suspend time reported in the DB2 Accounting Performance Reports.
Suspend time is usually detailed for every application via connection and/or correlation ID. At the end of the DB2 performance report, after all the individual applications, there is usually a “Grand Total” section that summarizes them. Monitor the suspend time there to understand how long the applications and the DB2 system may be waiting for overall CPU resources that are shared across all the other applications.
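To turn those per-application numbers into a trackable metric, compute suspend time as a share of elapsed time and rank applications by it; this also doubles as a quick way to surface the top performance hogs. A minimal sketch, assuming you have already extracted (correlation ID, elapsed seconds, suspend seconds) rows from the accounting report (the values below are illustrative):

```python
def top_suspenders(rows, n=10):
    """Rank applications by suspend time as a fraction of elapsed time."""
    ranked = [
        (corr_id, suspend / elapsed)
        for corr_id, elapsed, suspend in rows
        if elapsed > 0
    ]
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

rows = [
    ("BATCH01", 120.0, 6.0),    # 5% of elapsed time suspended
    ("CICSAPP", 300.0, 90.0),   # 30% suspended -- investigate this one first
    ("DDFWEB",  60.0,  3.0),    # 5% suspended
]
for corr_id, share in top_suspenders(rows, n=3):
    print(f"{corr_id}: {share:.0%} of elapsed time suspended")
```

Watching this ranking day over day shows whether growing workload is starting to starve particular applications of CPU or I/O.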
Monitor these three DB2 performance areas as frequently as possible to understand their minimum, maximum, and normal performance ranges. Any dramatic change in their DB2 performance figures can indicate further performance problems ahead.
Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high performance internet business solutions. He is an IBM Gold Consultant, Information Champion, President of DAMA-NCR, former President of the International DB2 User Group, and a frequent speaker at national and international conferences. His architectures, designs, and performance tuning techniques help organizations better leverage their information assets, saving millions in processing costs.
I will be speaking at the Detroit, Cleveland and Columbus user group meetings in August. Below are the links to sign up for the meetings. I will be presenting “SQL Considerations for a Big Data 22 Billion Row Data Warehouse” and “Big Data Disaster Recovery Performance.”
- MDUG – Detroit (Novi), Michigan – August 20th – http://www.mdug.org/
- NEODBUG – Cleveland, Ohio – August 21st – http://www.neodbug.org/2_1/next.html
- CODUG – Columbus, Ohio – August 22nd – http://www.codug.org/
Also as President of the DAMA-NCR Washington DC user group I would like to announce DAMA Day September 16, 2014. Great speakers with topics you need to know!
- John Ladley – Using Enterprise Architecture to Manage Data Governance and Information Management
- David Loshin – Establishing a Business Case for Data Quality Improvement
- Catherine Ives – Understanding and working with the DATA Act
- Peter Aiken – The Case for the CDO
For more information go to www.dama-ncr.org.