More success factors for handling Big Data and Data Warehouse Performance Part 2

Well, the votes are in: 22 billion rows is big enough data. It's not the billions of web log rows of a Google or Facebook, but it's big enough for everyone. One of the comments that struck me was that one in a million happens 22,000 times. So whatever your criterion is for big data, it is more a state of mind about the amount of data than the actual terabyte totals or row counts. Regardless of which database system you work with, big is a relative term. Just ask your SQL Server, Oracle and Sybase DBA friends what they consider a big system. Usually the answer is nowhere near what you get for DB2 z/OS or even DB2 LUW systems. I talked about this last year in this blog (Performance is Relative).

Other comments and questions received about last week’s blog asked for more clarification on the idea of keeping a database design simple. So below are three different ways to keep your big data data warehouse design simple.

First: There are reasons Bill Inmon's and Ralph Kimball's centralized and decentralized data warehouse ideas are so popular: those design patterns work. Design patterns for all types of IT applications, from Java/.NET MVC (model view controller) to various business models and standard processes, have been extensively analyzed and endorsed over many years through design pattern books, conferences and government studies. The centralized and decentralized data warehouse design patterns work, and your design should use them for your data warehouse performance. Big data or not, there is no reason to do something more complex. Starting from these design patterns, using and optimizing the simple pattern of Fact table(s) surrounded by Dimension table(s) will deliver data warehouse performance. Decentralizing or extending these patterns with as many Fact tables and slowly changing Dimension tables as you need minimizes the amount of big data referenced in a typical transaction, and your data warehouse performance won't be an issue.
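As a minimal sketch of this pattern (all table and column names here are hypothetical, not from the article), a simple star design keeps the Fact table narrow and pushes the descriptive attributes out to the Dimension tables:

    -- Hypothetical dimension tables: small, descriptive, keyed by a surrogate key.
    CREATE TABLE DIM_DATE
        (DATE_KEY       INTEGER       NOT NULL PRIMARY KEY,
         CALENDAR_DATE  DATE          NOT NULL,
         FISCAL_MONTH   SMALLINT      NOT NULL);

    CREATE TABLE DIM_CUSTOMER
        (CUSTOMER_KEY   INTEGER       NOT NULL PRIMARY KEY,
         CUSTOMER_NAME  VARCHAR(100)  NOT NULL,
         REGION_CODE    CHAR(3)       NOT NULL);

    -- Hypothetical fact table: only dimension keys and additive measures,
    -- so each of the billions of rows stays as short as possible.
    CREATE TABLE FACT_SALES
        (DATE_KEY       INTEGER       NOT NULL REFERENCES DIM_DATE,
         CUSTOMER_KEY   INTEGER       NOT NULL REFERENCES DIM_CUSTOMER,
         SALE_AMOUNT    DECIMAL(11,2) NOT NULL,
         SALE_QUANTITY  INTEGER       NOT NULL);

Typical transactions then touch one narrow slice of the Fact table through the small Dimension tables instead of scanning wide, catch-all rows.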

Second: Make sure to normalize your big data design. It's tempting to try to consolidate everything within a data warehouse performance design. Unfortunately, having too many elements in a table forces too much data into an application transaction, and data warehouse performance can suffer. Just as the centralized and decentralized data warehouse design patterns have been used for years, database table normalization has been around even longer because it logically optimizes your design. The database normalization process has been documented everywhere over the years, and it is effective for making sure the table elements relate to the whole database table key(s). Combining table keys or designs causes excessive repeating data or repeating groups, while over-normalization leads to excessive application joins. Normalization is about striking a balance, and no one does it perfectly the first time. Normalize your data warehouse design several times so that transaction performance stays balanced across all critical applications.
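To make the normalization point concrete, here is a small sketch (again with hypothetical names): a repeating group packed into one table versus the same data normalized so every element depends on the whole table key.

    -- Repeating group: every transaction drags three address columns along,
    -- and adding a fourth address means changing the table.
    CREATE TABLE CUSTOMER_DENORM
        (CUSTOMER_KEY  INTEGER      NOT NULL PRIMARY KEY,
         ADDRESS_1     VARCHAR(80),
         ADDRESS_2     VARCHAR(80),
         ADDRESS_3     VARCHAR(80));

    -- Normalized alternative: address rows depend on the whole key
    -- (CUSTOMER_KEY, ADDRESS_SEQ), and transactions read only the rows they need.
    CREATE TABLE CUSTOMER_ADDRESS
        (CUSTOMER_KEY  INTEGER      NOT NULL,
         ADDRESS_SEQ   SMALLINT     NOT NULL,
         ADDRESS_LINE  VARCHAR(80)  NOT NULL,
         PRIMARY KEY (CUSTOMER_KEY, ADDRESS_SEQ));

The balance comes from stopping before every lookup turns into another join; if a critical transaction always needs the first address, that one column may be worth keeping on the parent table.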

Third: Design, test, redesign, test and repeat. Schedule enough database and application meetings and testing time to understand all the application transactions and reports. Data warehouse performance and modeling big data can get unwieldy, so testing your design early is vital. Sometimes big data table population numbers cause tools to abort. Cut the number of zeros down, model all your application transactions against your database design, build it, and run applications against it. Data warehouse performance requires real-life testing and actually running the code or SQL that interfaces with the design. No one has time to do it perfectly, but everyone will be mad if you have to redesign and do it all over right before implementation. Know your performance before production through thorough testing. Big data and data warehouse performance require design and testing. Make sure to do these several times during your development with as much big data as possible.
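One way to test early, sketched below against the hypothetical tables from the first example, is to load a scaled-down row count (cut the zeros: millions instead of billions) and EXPLAIN the critical transactions before committing to the design. This assumes a DB2 for z/OS environment with a PLAN_TABLE created under your SQLID.

    -- Capture the access path of a representative star-join transaction.
    EXPLAIN PLAN SET QUERYNO = 101 FOR
    SELECT D.FISCAL_MONTH,
           C.REGION_CODE,
           SUM(F.SALE_AMOUNT) AS TOTAL_SALES
    FROM   FACT_SALES   F,
           DIM_DATE     D,
           DIM_CUSTOMER C
    WHERE  F.DATE_KEY     = D.DATE_KEY
    AND    F.CUSTOMER_KEY = C.CUSTOMER_KEY
    AND    D.FISCAL_MONTH = 9
    GROUP  BY D.FISCAL_MONTH, C.REGION_CODE;

    -- Review the join methods and access types the optimizer chose.
    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME, ACCESSTYPE, MATCHCOLS
    FROM   PLAN_TABLE
    WHERE  QUERYNO = 101
    ORDER  BY QBLOCKNO, PLANNO;

If the access paths look wrong at 22 million rows, they will not get better at 22 billion; redesign, reload and EXPLAIN again.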

These are only some of the simple truths that ensure the data warehouse performance of your big data system is a success.

I am speaking at the upcoming Baltimore Washington DC DB2 Users Group meeting on September 14th, where I will be presenting "DB2 Performance Database Discoveries and Recommendations Part 2". The new meeting agenda information should be posted soon, and more information can be found at: http://www.bwdb2ug.org/index.htm

Coming up Sept. 29th and 30th, two great DB2 classes are being offered in Washington DC: DB2 10 for z/OS Performance Training on the 29th and SQL Performance Training on the 30th. Get more information here: http://davebeulke.com/db2-performance-tuning-and-sql-training/

I will also be presenting at the International DB2 User Group (IDUG) European Conference in Prague, where I will give "DB2 10 Temporal Database Designs for Performance" on Monday, November 14, 1:30 PM – 2:30 PM. For more details on my presentation click here. I will also be involved in the data warehousing Special Interest Group (SIG) and the final discussion panel, so please join all the presentations, networking and discussions on DB2. Sign up now at www.idug.org.

Come see me in any of these venues and ask me your data warehouse performance questions.
