felcar2013
Partner - Creator III

Performance issue in an application with over 300 million records

Hi

I wanted to know how you would deal with four big fact tables (over 60 million rows each). If I concatenate two of them, one of 60 million and the other of 90 million records, is performance better than if I keep the tables separate? How can I separate historical data from current data without losing the history? Do you have any experience with link tables, and how would I implement them?
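For illustration, a forced concatenation of two fact tables with a source flag would look roughly like this (all table and field names are made up):

// Hypothetical sketch: one concatenated fact table with a flag to tell the source rows apart
Facts:
LOAD OrderID,
     CustomerID,
     Amount,
     'Sales' as FactType
FROM Sales.qvd (qvd);

Concatenate (Facts)
LOAD OrderID,
     CustomerID,
     Amount,
     'Returns' as FactType
FROM Returns.qvd (qvd);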

Opening a sheet with objects that contain many expressions (five to six columns, each calculating measures based on variables defined in the script) takes between 5 and 25 seconds. How can I speed this up?
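For context, the variable-driven measures look roughly like this (variable and field names are made up):

// In the load script: the expression text is stored in a variable
SET vTotalSales = Sum(Amount);

// In the chart column, the variable is expanded with dollar-sign expansion:
// =$(vTotalSales)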

I did the following to make the document faster:

I load from the SQL source database, store the tables as QVDs in one document (QW_1) and transform them as I need them. Then I load them into a different document (QW_2), where I create my charts and add the dimension tables directly from the SQL source.
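Sketched out, that two-tier setup looks roughly like this (connection, table and field names are all made up):

// QW_1 (extract/transform layer): read from the SQL source once, store to QVD
ODBC CONNECT TO SourceDB;
Orders:
SQL SELECT OrderID, CustomerID, OrderDate, Amount
FROM dbo.Orders;
STORE Orders INTO Orders.qvd (qvd);
DROP TABLE Orders;

// QW_2 (presentation layer): load the prepared QVD.
// A plain field list with no transformations keeps the QVD load
// "optimized", which matters on tables of this size.
Orders:
LOAD OrderID, CustomerID, OrderDate, Amount
FROM Orders.qvd (qvd);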

I deleted fields I did not need, added flag fields to use in my expressions, converted complex keys to integers, tried to avoid count distinct (although in many cases that is not possible, for example when many dimensions are associated with a customer history table), and joined some tables. In total the application holds over 300 million records across more than 20 fact and dimension tables, and the application file is 5.1 GB.
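Two of those steps, sketched with made-up names:

// 1) Replace a composite key with a compact integer key
Orders:
LOAD AutoNumber(CustomerID & '|' & OrderDate) as OrderKey,
     CustomerID,
     Amount
FROM Orders.qvd (qvd);

// 2) Replace Count(DISTINCT CustomerID) in charts with a summable flag,
//    assuming one row per customer in this table
Customers:
LOAD CustomerID,
     1 as CustomerFlag        // chart expression becomes Sum(CustomerFlag)
FROM Customers.qvd (qvd);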

Any comment is very welcome.

Felipe


4 Replies
swuehl
MVP

Some best practices are discussed in this thread.

There are also some data modelling best practice documents under the Documents section of this forum.

felcar2013
Partner - Creator III
Author

Thanks for this

I checked it. There is a QlikView document on best practices (from 2011) that talks about "document chaining". I could not find much discussion on this, and I only need the basics. Do you know if it can still be used? I work with QV 11 R2. I could not find anything in the Reference Manual, or maybe my search function is not working.

thanks

swuehl
MVP

Document chaining could indeed be quite useful for you.

For example, you can create an overview QVW at a higher aggregation level or without historical data.

Only when someone needs to look at finer granularity or older records do you refer them to a second QVW, passing the selection state along to that document.

This allows the majority of users to work with a smaller, better-performing application, while only the 'power users' work with the full data set.
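A rough sketch of such an overview document, assuming the detailed facts already sit in a QVD (file, field and time limit are made up):

// Overview QVW: same facts, but aggregated and without older history
OverviewFacts:
LOAD CustomerID,
     OrderDate,
     Sum(Amount) as Amount
FROM Facts.qvd (qvd)
WHERE OrderDate >= AddYears(Today(), -2)    // keep only the last two years
GROUP BY CustomerID, OrderDate;

In QlikView 11 the jump between the documents is then typically set up as a button action (Open QlikView Document, with the option to transfer the selection state) rather than in the script.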

You should find some threads / documents about document chaining in this forum, e.g.

http://community.qlik.com/thread/53743

felcar2013
Partner - Creator III
Author

Thanks for the link and your help.