Hi everyone,
I’m currently facing significant performance issues in a Qlik Sense app and would really appreciate any insights or suggestions.
After multiple optimization efforts to reduce data volume, the app size is still around 2.1 GB. The main problem lies in a specific sheet that contains a pivot table. The underlying dataset for this table currently has approximately 36 million rows.
Due to stakeholder requirements, the table needs to be highly flexible: it has around 40 dimensions, most of them controlled by show conditions, and users can reorder the columns themselves.
Because of this flexibility requirement, we are using the standard pivot table object, as the alternative pivot table does not support this level of interactivity.
We have already removed most of the formatting logic from the dimensions to improve performance. While this helped somewhat, we are still experiencing memory limit errors and slow rendering on that sheet.
What’s particularly confusing is that this app was migrated from QlikView. In the original QlikView version, even more data is loaded, yet we never encountered memory limit issues there.
At this point, I’m looking for any ideas or best practices to improve performance, especially regarding the large pivot table with many conditional dimensions.
Please feel free to ask any follow-up questions if more context is needed. Thanks in advance for any help!
Hi @Eny396
QlikView and Qlik Sense are not the same product, and one of the main differences is pivot table management. In Qlik Sense, pivot tables are among the most memory-expensive objects, especially when many dimensions are conditionally evaluated or rendered dynamically and when the dataset is larger than 10M rows.
QlikView could handle this because it pre-evaluated much of the pivot logic and relied less on real-time recalculation.
The main concern I have about your design is the combination of 36M rows with 40 dimensions, show conditions, and user-controlled ordering. Even when a dimension is hidden, Qlik Sense still evaluates its expressions and allocates memory for it.
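For illustration, a conditional column in Sense typically looks something like this (the island field ColumnPicker and the dimension names are made-up examples, not taken from your app): each dimension carries a "Show column if" expression that tests whether the user has picked that column, and every one of those expressions is evaluated on each interaction, shown or not.

```
// Hypothetical data island the user selects columns from:
ColumnPicker:
LOAD * INLINE [
ColumnName
Region
Product
Customer
];

// "Show column if" expression on the Region dimension:
// true only when 'Region' is among the selected column names
=SubStringCount(Concat(DISTINCT ColumnPicker, '|'), 'Region') > 0
```

With 40 such conditions on a 36M-row table, the evaluation cost adds up even before the pivot itself is calculated.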
I would suggest splitting the pivot table into 2 or 3 smaller pivots with different levels of aggregation.
Where possible, use a straight table instead.
I have seen someone suggest using the Pick() function to consolidate the dimensions into 3-4 "meta" dimensions, each resolving to one real dimension at a time. This should reduce memory usage and calculation cost. Unfortunately, I have never tried this approach myself, so I can't share details; maybe someone in the community can help.
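As a rough, untested sketch of that idea (all field names here are hypothetical): the user selects a value in an island field, and a single calculated dimension uses Pick() with Match() to swap in the corresponding real field, so only one dimension expression is live at a time instead of dozens of conditional columns.

```
// Calculated "meta" dimension for the pivot table.
// DimName is a hypothetical data-island field the user selects from;
// Region, Product and Customer stand in for your real dimension fields.
=Pick(Match(Only(DimName), 'Region', 'Product', 'Customer'),
      Region, Product, Customer)
```

If this works as described, three or four such meta dimensions could replace the 40 conditional ones, but please validate the behavior (especially totals and sorting) against your current pivot before relying on it.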