Hi,
We are currently facing performance issues in one of our largest Qlik Sense apps.
We have a data model with one large fact table (>200 million rows) and a few dimension tables. We already use a calculation condition so that results are only shown when the number of selected rows is below 10 million.
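For reference, the calculation condition has roughly the following form (the key field name here is just a placeholder, not our actual field):
Count(%FactKey) < 10000000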
As I select more and more filters, performance gets worse:
1. 1.2 million rows selected -> 550 rows in straight table -> result in 5 sec
2. 1 million rows selected -> 450 rows in straight table -> result in 10 sec
3. 500k rows selected -> 172 rows in straight table -> result in 15 sec
The filter dimensions come from different tables. Does anyone know why Qlik gets slower with more selections? How does Qlik Sense technically work when it selects the data from the data model and renders the straight table?
@kai_berlin It will be difficult to say what's going wrong without looking at your data model and straight table statistics. Check the straight table statistics, for example which measure takes the longest to render; you can get these statistics using monitoring apps. Performance issues are mostly related to a complex calculated dimension or measure, so you may be able to optimise those. Below is a link with some techniques for optimising straight tables.
Also, please see the blog below to understand how the calculation engine works:
https://community.qlik.com/t5/Design/The-Calculation-Engine/ba-p/1463265
Hi @Kushal_Chawda,
do you have a suggestion for a monitoring app, or do you know of any free app for this (I only found QSDA Pro)?
Our measures are not that complex; they are different sales KPIs with the following structure:
current year: sum({<Jahr={$(=Max(Year))}>} KPI)
previous year: sum({<Jahr={$(=Max(Year)-1)}>} KPI)
YoY development (using master measures): KPI CY / KPI PY - 1
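Written out in full, the YoY measure is therefore essentially the following (just the two expressions above combined, nothing else):
sum({<Jahr={$(=Max(Year))}>} KPI) / sum({<Jahr={$(=Max(Year)-1)}>} KPI) - 1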
Key fields are already optimized, the calculation condition is in place as mentioned, and there are no calculated dimensions.
Having read how the calculation engine works, I would assume that it is not the aggregation that is getting slower, but rather the selection, which needs much more time with every additional filter selected. Is this plausible?
@kai_berlin I don't think there is any free monitoring app that provides measure-level statistics; QSDA Pro is the only one I know of. Perhaps others can chime in if there is one.
Selections are part of Qlik's associative engine, so they are fast. It is your visuals that take time to render, depending on various factors, and the only way to know for sure is to use a monitoring app. Your measures look simple. Make sure you don't have selections coming from disconnected tables, because selections on island tables are bad from a performance point of view. Also check memory usage; if you are low on RAM, that can cause issues. Identifying which selection/measure is the culprit, together with the RAM usage, should give you some idea.
You can also see the effect by reducing the overall data volume in your app.
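For example, while testing you could restrict the fact table in the load script to recent data only. A rough sketch (the QVD path and table name are just placeholders, Jahr is your year field):
Facts:
LOAD *
FROM [lib://Data/Facts.qvd] (qvd)
WHERE Jahr >= Year(Today()) - 1;   // keep only the two most recent years for the test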
@Kushal_Chawda we don't have any data islands in the data model, and I have already followed all the suggestions mentioned here: https://community.qlik.com/t5/App-Development/Qlik-Sense-Straight-Table-rendering-very-slow/td-p/166...
RAM usage according to the Operations Monitor app was not that high, around 70% of committed RAM.
As written in my first post, here are the results of the selections again:
1. 1.2 million rows selected -> 550 rows in straight table -> result in 5 sec
2. 1 million rows selected -> 450 rows in straight table -> result in 10 sec
3. 500k rows selected -> 172 rows in straight table -> result in 15 sec
Having fewer rows selected and fewer rows in the straight table results in a longer load time. I still don't understand why this should be due to rendering.
Our sheet also has some single-KPI charts at the top. Even for those charts the load time increases with each additional selection, so from my point of view it seems to be the associative engine. My question is whether that is plausible.