Criteria for Determining DB2 Commit Scope Performance (Part 2)

Last week was part 1 of the discussion on DB2 commit scope considerations, found here.  This week the DB2 commit scope programming discussion continues with batch process considerations.  The following addresses some of the program requirement discussions needed to determine the best DB2 commit scope practices for achieving the best performance from your DB2 Java applications.

  1. Batch commit scope considerations
    Batch program commit scope is all about transaction integrity and how many transactions should be performed within each commit scope block.  The batch program’s commit processing is necessary overhead that slightly extends the elapsed time of the processing.  Committing too frequently slows the batch processing down, while not committing frequently enough makes the recovery/restart of the batch process take longer.  So if your batch process does not commit frequently enough, then when the process fails, the time to recover and restart it becomes a big processing-window problem.
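    The trade-off above can be sketched in JDBC by turning autocommit off and committing every N transactions.  This is a minimal sketch, not production checkpoint code: the table name BATCH_TARGET and the interval of 1,000 are hypothetical placeholders you would tune from your own recovery-window analysis.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchCommitExample {
    // Hypothetical commit interval -- tune from your recovery/restart analysis.
    static final int COMMIT_INTERVAL = 1000;

    // True when a commit scope block of 'interval' transactions has completed.
    static boolean commitPoint(int processed, int interval) {
        return processed > 0 && processed % interval == 0;
    }

    public static void process(Connection conn, Iterable<String> records) throws SQLException {
        conn.setAutoCommit(false);  // group work into explicit commit scope blocks
        int count = 0;
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO BATCH_TARGET (COL1) VALUES (?)")) {  // hypothetical table
            for (String rec : records) {
                ps.setString(1, rec);
                ps.executeUpdate();
                if (commitPoint(++count, COMMIT_INTERVAL)) {
                    conn.commit();  // harden the block and release locks
                }
            }
            conn.commit();  // commit the final partial block
        } catch (SQLException e) {
            conn.rollback();  // back out only the current uncommitted block
            throw e;
        }
    }
}
```

    A larger interval means fewer commits and a shorter elapsed time but a longer re-drive after a failure; a smaller interval flips that trade-off.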

    Analysis needs to be done to find the balance between overall performance and recoverability/restart-ability requirements.  This is especially important for the critical path of all nightly processing and especially critical for batch programs that involve a large number of objects since syncing all the objects can be very time consuming for each batch commit.

    The commit scope processing is usually coordinated through a company-standardized checkpoint/restart routine or a third-party checkpoint product.  The checkpoint/restart routine usually registers all aspects of the processing situation and its commit position within the processing.  The checkpoint most likely notes the number of records read from flat files, the IMS segments processed, and all aspects of the batch transactions performed against the database at the time of the commit.  Given the maturity of DB2 in many companies’ environments, there are usually several existing program samples available to copy and build new prototypes that will fit your processing needs.
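    A checkpoint/restart routine along those lines can be sketched as follows.  This is an illustration only: the BATCH_CHECKPOINT table and its columns are hypothetical, standing in for whatever your company-standardized routine or third-party product actually records.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CheckpointRestart {
    // Record the commit position; committed in the same unit of work as the
    // batch updates so the checkpoint and the data never disagree.
    public static void saveCheckpoint(Connection conn, String jobName,
                                      long recordsRead, String lastKey) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE BATCH_CHECKPOINT SET RECORDS_READ = ?, LAST_KEY = ? "
                + "WHERE JOB_NAME = ?")) {  // hypothetical checkpoint table
            ps.setLong(1, recordsRead);
            ps.setString(2, lastKey);
            ps.setString(3, jobName);
            ps.executeUpdate();
        }
        conn.commit();  // checkpoint row hardens together with the data changes
    }

    // On restart, read back the last committed position.
    public static long restartPosition(Connection conn, String jobName) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT RECORDS_READ FROM BATCH_CHECKPOINT WHERE JOB_NAME = ?")) {
            ps.setString(1, jobName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : 0L;  // 0 means a fresh start
            }
        }
    }

    // Number of already-processed input records to skip past on restart.
    public static long recordsToSkip(long checkpointPosition) {
        return Math.max(0L, checkpointPosition);
    }
}
```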

    Make sure to incorporate batch commit scope processing from the very beginning of any program development.  Commit processing and restart-ability need to be thoroughly tested and measured to understand the restart timeframe and performance impacts.  Commit scope restart-ability testing is a long process considering the complexities of restarting with flat files, processes that reference many databases/tables, multiple operating system platforms, and all the various restarted interfaces.  I once helped debug a process that referenced Windows SQL Server, UNIX Oracle, and DB2 for z/OS, along with flat files on each platform.  Testing its commit processing and restart-ability was difficult because of the number of objects, their different attributes, and the complexity of the combinations of possible errors that needed to be tested.

    It is also very important to note all the interfaces and their settings.  Remember to note the JDBC properties that I talked about in a previous blog and mentioned again in last week’s blog.  Having different test JDBC properties from production JDBC properties can dramatically impact DB2 commit behavior, recovery scenarios, and performance, so make sure to note all your DB2 Java, COBOL, and CICS interfaces and their settings within every test, QA, and production environment.
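    One way to keep those settings consistent across environments is to build the commit-sensitive connection settings in a single documented place, as in this sketch.  The property names and isolation level shown are illustrative assumptions; the exact DB2 driver properties you document will come from your own environments.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class JdbcCommitProperties {
    // Collect the connection properties in one place so test, QA, and
    // production can be compared setting by setting.
    public static Properties commitSensitiveProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        // Add your driver-specific properties here; each one that touches
        // autocommit, isolation, or batching changes DB2 commit behavior.
        return props;
    }

    public static Connection connect(String url, Properties props) throws SQLException {
        Connection conn = DriverManager.getConnection(url, props);
        conn.setAutoCommit(false);  // explicit commit scope, not per-statement commits
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);  // example choice
        return conn;
    }
}
```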
    Extra documentation and DB2 commit scope analysis of the overall configuration and memory settings should be done when running against a Linux/UNIX environment.  Many of these Linux/UNIX servers are operated within shared virtual machine (VM) environments, and their shared resources and memory model can impact all aspects of your DB2 commit scope and performance.  Within these VM environments, there are many operating system settings across a number of servers running on hardware with limited CPU and memory, along with the JVMs in their configuration and the garbage collection settings and memory allocations that can be used with your Java program execution.

    Within these VM environments, extra Java batch processing DB2 commit scope analysis is strongly recommended.  This extra analysis is needed to understand the VM environment hardware infrastructure supporting your workload.  Analysis is needed to examine any competing VMs on the same hardware, the memory requirements of all your Java objects and their peak workloads during processing, and restart-ability.
    Also make sure to validate whether your VM is sharing resources, and test with the minimum and maximum workloads on your VM alongside the maximum workloads of the other shared VMs.  By testing the shared VM environment at maximum workload, over-allocated or misconfigured VMs expose their CPU constraints, memory over-allocation issues, or I/O bottlenecks during testing instead of failing during production.
    Also be aware of VMware vMotion, VM infrastructure software that can move your VM to another underlying hardware configuration.  The movement of your processing by vMotion can change all your performance metrics and your memory configuration, and depending on the UNIX/Linux administration/automation, it can happen automatically.  This can be a major issue because your DB2 commit/restart processing may have great performance one day and fail miserably the next.

    The VM environment’s Linux/UNIX logging and other debugging-level settings can have a huge performance impact and need to be documented to verify all the objects are being processed and restarted appropriately.  These Linux/UNIX, VM, and Java configuration and memory settings need to be evaluated against the large number of objects and interfaces used to determine what is needed for recovery operations during a restart.  Sometimes restarts can require a bigger memory footprint, which is another reason why DB2 commit scope testing needs to start at the beginning of process development, when required resources are being determined.

    Sometimes the test VM systems can be wide open with their memory and CPU configurations, while the production VM environments are locked down so VMs can be managed and moved around within the large VM server farm environment.  Make sure your testing understands and documents all the VM and Java program requirements, operating system configuration, and memory settings needed to satisfy the memory requirements of both a shared, possibly constrained test environment and a high-performance production environment.

Next week we’ll discuss coding correct SQL and other important considerations concerning commit scope.

I will be giving a security seminar and two presentations at this year’s IDUG conference.  Make your plans and sign up for the IDUG DB2 Technical Conference in Austin, Texas, coming up May 22-26, 2016.  Also plan on attending any of my sessions.

  • “How to do a DB2 Security Audit”
    Half-day seminar Tuesday 2:15-4:30 PM, Room:  Frio
  • “Performance Enterprise Architectures for Analytic Design Patterns”
    Presentation Wednesday, May 25th 1:00-2:00 PM, Room: Pecos
  • “DB2 Security Best Practices: Protecting Your System from the Legions of Doom”
    Presentation May 26th 8:00-9:00 AM, Room: Trinity A

For more details on any of these items go to


Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high-performance internet business solutions. He is an IBM Gold Consultant, Information Champion, President of DAMA-NCR, former President of the International DB2 Users Group, and a frequent speaker at national and international conferences. His architectures, designs, and performance-tuning techniques help organizations better leverage their information assets, saving millions in processing costs.
