JockeRapp
Partner - Contributor II

Large data volume solution

We have a large Qlik Sense environment with a few thousand users and a team of dedicated Qlik developers, but we have a problem we're unable to solve so I need some suggestions.

The issue is the following: we have an app with slow performance due to large data volumes. The app is approx. 7 GB, with the main table of the star schema holding around 250 million rows and 48 fields, many of them with a lot of unique values. All of these fields are used, and data is analyzed at the lowest level.

The app is used daily by multiple users in over 60 countries. Most users rely on bookmarks to get to their filters and analyses, and most only have access to one specific country.

How can we optimize this? 

(Some thoughts: Loop and Reduce does not exist in Qlik Sense. We haven't tried ODAG, but from what I understand it has a limit of 10 simultaneous apps, and users wouldn't be able to create/use bookmarks in ODAG apps.)

3 Replies
forte
Partner - Creator

Hi @JockeRapp 

As I understand it, you are already using SECTION ACCESS in your app to control access for the different countries (if not, adding it should already improve things).

If so, I don't think there would be much difference between a single app with SECTION ACCESS and a separate app per country (the users work with the same data either way, though opening the app might be faster).

If you want to try having separate apps, you can create a "pattern" app: for example, use DocumentName(), which returns the app ID, and combine it with a table that maps each app to its assigned country.
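To make the pattern-app idea concrete, here is a minimal sketch of such a load script. All app IDs, library paths, and field names below are placeholders, not from the original thread; note that in Qlik Sense, DocumentName() returns the app GUID rather than the app title.

```
// Map each country app's ID to its country code (IDs here are hypothetical).
CountryMap:
MAPPING LOAD * INLINE [
AppId, Country
11111111-aaaa-0000-0000-000000000001, SE
11111111-aaaa-0000-0000-000000000002, DE
];

// Resolve this app's country from its own document ID.
LET vCountry = ApplyMap('CountryMap', DocumentName(), Null());

// Load only this country's slice of the shared QVD.
Facts:
LOAD *
FROM [lib://Data/Facts.qvd] (qvd)
WHERE Country = '$(vCountry)';
```

With one such script deployed to every country app, each reload pulls only that country's rows, which mimics the old Loop and Reduce behavior.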

Anyway, as I mentioned, I'm not sure it would help you on performance...

Hope it helps

Regards

marcus_sommer

Usually the lowest level is only needed for the current data, not for the historical data. Therefore a fact table with mixed granularity may be an option:

Fact-Table-with-Mixed-Granularity 
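A rough sketch of the mixed-granularity approach, keeping row-level detail for the current year and aggregating older periods (all table, field, and path names are assumed for illustration):

```
// Row-level detail for the current year only.
Facts:
LOAD OrderID, OrderDate, Country, Product, Amount
FROM [lib://Data/Facts.qvd] (qvd)
WHERE Year(OrderDate) >= Year(Today());

// Older years aggregated to month/country/product level and concatenated
// into the same fact table, dramatically reducing the historical row count.
Concatenate (Facts)
LOAD Null()                as OrderID,
     MonthStart(OrderDate) as OrderDate,
     Country,
     Product,
     Sum(Amount)           as Amount
FROM [lib://Data/Facts.qvd] (qvd)
WHERE Year(OrderDate) < Year(Today())
GROUP BY MonthStart(OrderDate), Country, Product;
```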

Another look should go to the cardinality of the fields: maybe they could be reduced in some way, for example by splitting timestamps, removing milliseconds or other unneeded digits with rounding functions, deleting record IDs, grouping field values, replacing keys with AutoNumber(), and similar measures:

The-Importance-Of-Being-Distinct 
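The cardinality reductions listed above might look like this in the script (field and path names are placeholders):

```
Facts:
LOAD
    // Split the timestamp: two low-cardinality fields instead of one
    // near-unique timestamp field.
    Date(Floor(EventTimestamp))               as EventDate,
    Time(Frac(Round(EventTimestamp, 1/1440))) as EventTime,  // rounded to the minute
    // Replace a wide composite key with a compact sequential integer.
    AutoNumber(CustomerID & '|' & OrderID)    as %OrderKey,
    // Keep only the precision the analysis actually needs.
    Round(Amount, 0.01)                       as Amount
FROM [lib://Data/Facts.qvd] (qvd);
```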

Further, you may save some resources by removing the number/date formatting from some fields, because dual fields store not only the value but also the format pattern.

And of course you should check the UI, especially for nested if() constructs, aggr() expressions, and inter-record functions: maybe they could be replaced with simpler logic and/or moved into the script. Further, if there are measures within your dimensions, you should consider moving them to the fact table.
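As an illustration of moving UI logic into the script: a chart expression such as Sum(If(Status = 'Open' and Amount > 1000, Amount)) evaluates the condition per row at chart-calculation time, whereas a precomputed flag lets set analysis do the work. The field names here are assumptions, not from the original app:

```
Facts:
LOAD *,
    // Precompute the condition once at reload time instead of
    // per row in every chart calculation.
    If(Status = 'Open' and Amount > 1000, 1, 0) as LargeOpenFlag
FROM [lib://Data/Facts.qvd] (qvd);

// The chart expression then becomes set analysis over the flag:
//   Sum({<LargeOpenFlag = {1}>} Amount)
```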

- Marcus


JockeRapp
Partner - Contributor II
Author

Thanks for your suggestions!

Using a pattern app to do a "ghetto" Loop and Reduce is something we will be trying out, thanks @vforte

@marcus_sommer regarding the suggestion to use mixed granularity in the fact table for historical data: as an architect/developer I like the idea. However, we know that some users will sometimes demand details on historical figures, which would force us to keep an app with detailed history somewhere. And if that app exists, users will open it...