YB_Mo
Contributor II

Best Practices for Qlik Sense Applications > 4 GB in Size

Hello my colleagues,

I need your advice or a reference to a best practice use case.
Brief introduction.

We use a SQL-based DWH beneath Qlik Sense. In this DWH (1.5 TB) we model the data as a star schema, so we have "one" huge data model. In Qlik Sense we use data marts that represent only parts of this data model, such as production or purchasing. The strategy is to use one data mart for many applications; for example, the production mart feeds 15 QS production applications and contains 30 dimension tables and 42 fact tables.

The problem we are currently struggling with is an application that is to be used from locations worldwide. Therefore we can no longer use the "location" filter that we previously used to reduce the data load. If we load all the data of the data mart, the application grows to 4 GB of storage, and the reload time and the performance of the application become too poor.

What we have tried so far:
- QVD layer. This massively reduced the reload time, but not the performance of the app (a simplified sketch of this load follows below).
- Link table. Old school, but without much impact on app performance.
- Section Access. We hoped to control the "location" filter via SA, but the QS app still loads all the data into RAM and doesn't really run any better.
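For reference, a simplified sketch of the kind of QVD layer load we mean (table, field and file names are only illustrative, not our real model):

    // Facts and dimensions are stored as QVDs by the extract layer and
    // read back without transformations, which keeps the load "optimized".
    ProductionFacts:
    LOAD
        OrderID,
        PlantID,
        ProductID,
        OrderDate,
        Quantity,
        Value
    FROM [lib://DataFiles/Fact_Production.qvd] (qvd);

    DimPlant:
    LOAD
        PlantID,
        Plant,
        Location
    FROM [lib://DataFiles/Dim_Plant.qvd] (qvd);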

I have also heard about ODAG, but I haven't tried it yet because I don't want to have multiple copies of one app. I want one central app.


Is there a better approach/idea?

Thank you very much!

 


3 Replies
abhijitnalekar
Specialist II

Hi @YB_Mo ,

Please go through the link below; it will help you optimize the data model and the application.

https://predoole.com/2020/09/08/9-ways-to-optimize-qlik-performance-part-1-backend-layer/

Regards,
Abhijit
keep Qliking...
Help users find answers! Don't forget to mark a solution that worked for you!
marcus_sommer

Applications of 4 GB are certainly no small applications, but they are also nothing that an appropriately sized environment should have real problems with or perform badly on.

Your description indicates that your data model and/or the UI are not suitably designed. For example, you mentioned a switch to a link-table model which didn't change anything significantly. Quite often a link-table approach comes with worse performance; if there is no difference from a performance point of view, it suggests that your data model is far from optimized.

Therefore I suggest reviewing the data model and developing it in the direction of a star schema. Further: load only the needed fields and records, avoid any record IDs, split timestamps into dates and times, and take some more measures of this kind. Also prepare/pre-calculate everything possible within the script, so that simple expressions like sum(value) or maybe sum({< ... >} value) are enough and no aggr() or nested if-loops are necessary.
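A rough sketch of what I mean in the script (field names are only examples, not from your model):

    Facts:
    LOAD
        OrderID,
        // splitting a timestamp into date and time keeps the symbol tables small
        date(floor(OrderTimestamp)) as OrderDate,
        time(frac(OrderTimestamp)) as OrderTime,
        Quantity,
        Value,
        // pre-calculated flag instead of an if() per row in the UI
        if(DeliveryDate <= PromisedDate, 1, 0) as OnTimeFlag
    FROM [lib://DataFiles/Fact_Production.qvd] (qvd);

With such a flag the chart expression can stay as simple as sum(Value * OnTimeFlag) or sum({< OnTimeFlag = {1} >} Value).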

Besides this, your section access is probably not implemented correctly, because it should have a significant effect on the performance. Not necessarily during the first opening, because that is when the reduction is performed, but afterwards the available data set is smaller and, depending on the degree of the reduction, the performance should be faster.
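A minimal sketch of such a reduction on a location field (account names and values are only placeholders; the reduction field must exist in the data model with an upper-case name and upper-case values):

    Section Access;
    LOAD * INLINE [
        ACCESS, USERID, LOCATION
        ADMIN, INTERNAL\SA_SCHEDULER, *
        USER, DOMAIN\JANE.DOE, DE
        USER, DOMAIN\JOHN.SMITH, US
    ];
    Section Application;

    Facts:
    LOAD
        OrderID,
        Quantity,
        Value,
        upper(Location) as LOCATION
    FROM [lib://DataFiles/Fact_Production.qvd] (qvd);

Note that the * only stands for the values listed in the LOCATION column of the section access table, and the scheduler account needs its own row so that scheduled reloads keep working.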

- Marcus  

YB_Mo
Contributor II
Author

Marcus, thanks a lot for your invested time and detailed reply.

That a 4 GB application is not too huge for QS gives me hope. Following your advice, I will first try to implement Section Access properly and review its influence.

You made my Monday, thanks again!