IBM DB2 BLU: Best Database I/O Performance

As I discussed database I/O performance over the last three weeks here, here, and here, optimizing database I/O performance involves many different factors.  Last week I left off with the parameters for pinning your popular objects into DB2’s buffer pools, leveraging memory for I/O avoidance.

After hearing all the IDUG presentations and talking with several friends using the new IBM DB2 Version 10.5 BLU Acceleration, it’s apparent that its database I/O performance leads the industry.  The performance figures, testimonials, and real-life experiences with DB2 BLU provide evidence of huge I/O and CPU savings.  The following four reasons explain why so many shops are adopting IBM DB2 BLU to improve database CPU and I/O usage and get the best overall performance.

  1. IBM BLU Acceleration data skipping technology eliminates I/Os.  The most efficient I/Os are the ones you never do.  The new IBM BLU data skipping access technology skips over data, and the I/Os associated with it, that is not needed for an SQL answer.  This improves CPU and I/O performance and speeds up the filtering done for your SQL query’s WHERE clause processing.  By skipping unneeded data, the absolute minimum number of I/Os is performed to produce your answer result set, saving query elapsed time, CPU, and I/O.
  2. IBM BLU Acceleration leverages a columnar data store technology.  The new IBM BLU Acceleration columnar technology takes compression to the next level, with roughly 10x storage space savings; some customers are getting 90-95% data compression on their large database tables.  IBM BLU Acceleration’s new columnar table data store technology also completely eliminates the need to define indexes on your data, because it is faster to simply scan the compressed columnar data store.  By shrinking the overall database table size and eliminating indexes, the new IBM BLU Acceleration technology has a huge impact on operational backup and disaster recovery costs.  With 90-95% data compression, storage issues become a thing of the past and performance is dramatically improved.
  3. IBM BLU Acceleration also introduces actionable compression.  This patented compression algorithm preserves the order of data values within the compressed data.  Because the order is preserved, processing can start, skip (as mentioned earlier), and finish analyzing the data faster, accessing and leveraging only the data it needs.  Together, actionable compression and the new data skipping technology eliminate I/Os while making the remaining I/Os more efficient, since each one retrieves more data in compressed form.
  4. IBM BLU Acceleration technology leverages Single Instruction Multiple Data (SIMD) technology.  BLU’s SIMD support uses features of the latest microprocessor chipsets to apply a single instruction to multiple pieces of data held within the chip’s buffers at once.  By processing many data elements per instruction, SIMD makes operations such as interrogating and processing data far more productive, helping performance across all data types within the chipset data buffers.
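To make the data skipping idea in reason 1 concrete, here is a minimal Python sketch of the concept, not DB2’s actual implementation: a small synopsis records the min/max value of each storage block, and a query’s WHERE range is checked against the synopsis first, so blocks that cannot contain qualifying rows are never read at all.

```python
# Toy illustration of data skipping (hypothetical names, not DB2 internals).
# A synopsis records (min, max) per storage block; blocks whose range cannot
# satisfy the predicate are skipped, so their I/Os never happen.

BLOCK_SIZE = 4  # rows per block; DB2 BLU groups far more rows per extent

def build_synopsis(rows):
    """Record the (min, max) value for each block of rows."""
    synopsis = []
    for start in range(0, len(rows), BLOCK_SIZE):
        block = rows[start:start + BLOCK_SIZE]
        synopsis.append((min(block), max(block)))
    return synopsis

def scan_with_skipping(rows, synopsis, lo, hi):
    """Return rows in [lo, hi], reading only blocks the synopsis can't rule out."""
    result, blocks_read = [], 0
    for i, (bmin, bmax) in enumerate(synopsis):
        if bmax < lo or bmin > hi:
            continue  # skip this block: no row in it can qualify
        blocks_read += 1
        start = i * BLOCK_SIZE
        result += [r for r in rows[start:start + BLOCK_SIZE] if lo <= r <= hi]
    return result, blocks_read

# Data that is roughly clustered by value makes skipping very effective.
rows = [1, 2, 3, 4, 50, 51, 52, 53, 90, 91, 92, 93]
syn = build_synopsis(rows)
matches, blocks_read = scan_with_skipping(rows, syn, 50, 53)
print(matches, blocks_read)  # only 1 of the 3 blocks is actually read
```

The payoff is exactly the one described above: for a selective WHERE clause against well-organized data, most blocks are eliminated by a cheap metadata check instead of an expensive read.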

The combination of data skipping, the BLU columnar data store, actionable compression, and SIMD processing is a tremendous, unique set of performance enhancements for IBM DB2 BLU and the overall database industry.  All of these IBM DB2 BLU Acceleration features are providing customers with great performance today; queries that previously ran for hours sometimes complete in IBM DB2 BLU Acceleration in only seconds.
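The property that makes compression “actionable” (reason 3) can also be sketched in a few lines. The example below is an illustration of order-preserving dictionary encoding, not IBM’s patented algorithm: because codes are assigned in sorted value order, range comparisons for WHERE-clause filtering can run directly on the compressed codes, with no decompression.

```python
# Toy sketch of order-preserving dictionary compression (illustrative only).
# Codes are assigned in sorted value order, so v1 < v2 implies code(v1) < code(v2),
# and range predicates can be evaluated on the compressed column directly.

def build_dictionary(values):
    """Map each distinct value to a small integer code, in sorted value order."""
    return {v: code for code, v in enumerate(sorted(set(values)))}

values = ["cherry", "apple", "banana", "apple", "date", "banana"]
dictionary = build_dictionary(values)
encoded = [dictionary[v] for v in values]  # the "compressed" column

# Filter "value >= 'banana'" without decompressing a single row: encode the
# literal once, then compare integer codes instead of strings.
threshold = dictionary["banana"]
hits = [i for i, code in enumerate(encoded) if code >= threshold]
print(hits)
```

This is also why compression, data skipping, and SIMD reinforce each other: min/max synopses and single-instruction comparisons work just as well on the small ordered codes as on the original values, while each I/O carries many more of them.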

Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high performance internet business solutions. He is an IBM Gold Consultant, Information Champion, President of DAMA-NCR, former President of the International DB2 User Group, and a frequent speaker at national and international conferences. His architectures, designs, and performance tuning techniques help organizations better leverage their information assets, saving millions in processing costs.

St. Louis DB2 User Group Meeting June 3rd
I will be presenting at the St. Louis DB2 User Group meeting on June 3rd. There are two tracks: one for DB2 for z/OS presentations and one for DB2 LUW presentations.

I will be presenting both my “Big Data Disaster Recovery Performance” and my “SQL Considerations for a Big Data 22 billion row data warehouse” presentations.  Hope to see you there!

Sign up for the meeting at

