Going Deeper into Improving Database I/O Performance

As I discussed last week and the week before, tuning application and database performance by optimizing database I/O involves many different components.  When I left off last week, I noted that the physical I/O path must be traced back through the FICON channels, then the storage device cache, and finally the storage device itself.

Digging deeper into DB2 database I/O this week, there are three more items to examine and tune to get the best DB2 I/O performance.

  1. Get your top I/O requirements onto the newest hardware components.  The FICON connections I mentioned last week are the previous generation of storage connections.  The newest DB2 I/O performance leader is zHPF (High Performance FICON for System z).  zHPF is available with the newer, faster DS8800 R6.3 storage device and improves DB2 I/O performance dramatically: with zHPF and DB2 10, application performance can be up to 11 times faster.  DB2 10 already improved I/O performance dramatically, and pairing the new zHPF hardware with a new zEC12 processor delivers even more improvement, especially when referencing disorganized data.  Details of these dramatic DB2 I/O performance improvements are documented in the IBM Redpaper “DB2 for z/OS and List Prefetch Optimizer” and in this zHPF Frequently Asked Questions document.
  2. Sometimes a good access path does not equal good DB2 I/O performance.  Your access path may no longer be optimal because the number of DB2 I/Os required has grown tremendously as the database became disorganized through too many inserts, updates, or deletes without reorganization utilities.  When tables and indexes are disorganized, more DB2 I/Os are needed, they are scattered across the objects, and they become very inefficient.  Make sure the actual condition of your database is reflected when the access path is developed and analyzed, so you see the DB2 I/O work that is truly being executed.  Disorganized data is a performance killer and can make all the difference in the DB2 optimizer’s access path selection.
  3. Maximize your hardware by pinning objects into memory.  DB2 V10 introduced 1 MB page frames on System z for page-fixing your various buffer pools, whatever their page size, and the objects in them.  The new zEC12 mainframe offers more main memory, so combining DB2 V10 page fixing with the buffer pool page-steal options can deliver huge benefits, reducing CPU while eliminating DB2 I/Os.  Pin your most heavily used objects into reasonable memory allocations that do not cause excessive overall paging on your z/OS system.  Depending on your workload and object sizes, pinning objects into memory can cut CPU by as much as 20 percent or more while freeing DB2 I/O capacity for other objects.
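For the second item above, the DB2 real-time statistics tables can show which objects have accumulated enough churn since their last REORG to be disorganized. The following query is a hedged sketch: the SYSIBM.SYSTABLESPACESTATS columns are standard DB2 for z/OS real-time statistics, but the 20 percent churn threshold is purely illustrative and should be set to your own site's REORG policy.

```sql
-- Sketch: find table spaces whose insert/delete activity since the last
-- REORG suggests disorganization.  The 20% threshold is an assumption,
-- not a DB2 recommendation -- tune it to your own environment.
SELECT DBNAME, NAME, REORGLASTTIME,
       REORGINSERTS, REORGDELETES, REORGUNCLUSTINS, TOTALROWS
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE TOTALROWS > 0
   AND (REORGINSERTS + REORGDELETES) * 100 / TOTALROWS > 20
 ORDER BY REORGINSERTS + REORGDELETES DESC;
```

After running REORG on the objects this surfaces, run RUNSTATS as well, so the optimizer's access path selection reflects the newly organized data rather than the old, scattered state.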
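The pinning technique in the third item is set up through the -ALTER BUFFERPOOL command. This is a hedged sketch: PGSTEAL(NONE) and PGFIX(YES) are real DB2 10 options, but the pool name BP8, the VPSIZE, and the table space names are hypothetical placeholders for your own hot objects.

```sql
-- Sketch: dedicate a buffer pool to hot objects and pin it in memory.
-- PGSTEAL(NONE) tells DB2 the pool is sized to hold its objects entirely,
-- so no page stealing (and no re-read I/O) is expected; PGFIX(YES) fixes
-- the pool's pages in real storage, letting DB2 use 1 MB page frames
-- where available.  BP8 and VPSIZE are illustrative values.
-ALTER BUFFERPOOL(BP8) VPSIZE(100000) PGSTEAL(NONE) PGFIX(YES)

-- Then assign your most heavily used objects to the pinned pool
-- (MYDB.HOTTS is a hypothetical table space name):
ALTER TABLESPACE MYDB.HOTTS BUFFERPOOL BP8;
```

Size the pool against real storage first: a pinned pool that pushes z/OS into paging costs far more than the DB2 I/Os it saves.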

Performance still comes down to designing the database for your most-used workloads.  Database design is a trade-off among the processes that need to be executed: examine those processes, prioritize them, and then prioritize the object designs to minimize DB2 I/Os and CPU for the most important ones.  No two businesses or databases are the same, and database modeling, normalization, and separating or combining objects all have DB2 I/O implications.  Just as a Formula One race car is designed for speed and a semi-trailer truck is built for bulk transport, your database design and I/O should be designed, configured, and optimized for your business application processes.

Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high-performance internet business solutions. He is an IBM Gold Consultant, Information Champion, President of DAMA-NCR, former President of the International DB2 User Group, and a frequent speaker at national and international conferences. His architectures, designs, and performance tuning techniques help organizations better leverage their information assets, saving millions in processing costs.

  • Sign up for the IDUG DB2 Technical Conference in Phoenix, Arizona this May 12-16th.
  • Sign up for dinner with me during the IDUG Dine Around Dinner Thursday night.
  • Also plan on attending my presentation 1221-F04 “Big Data Disaster Recovery Performance” Tuesday at 4:30 at the IDUG conference.

Also, the IDUG European conference is still accepting presentation abstracts for the IDUG EU conference in Prague, Czech Republic, November 9-14.

For more details on any of these items go to www.idug.org.
