Hi
In the Sense System Performance Analyzer, there are many terms I'm not familiar with. Is there any documentation explaining them?
For example, during a test we found everything to be normal, but the 'Sense Cache Hit Rate' was high. How should we interpret this?
Best regards,
Susan
From the description of the master measure:
> Sense Engine Cache Hit Rate indicates how much the Sense Engine is having to cache results for user sessions. A higher rate means more data is being added to the cache (AKA "warming up the cache"). In general, response times (as observed by a user) will improve as the Cache Hit Rate decreases because more results are already cached and ready to go.
Hi Levi
Thank you for the reply. My next question is how to keep the Sense Cache Hit Rate down.
Our issue:
We have a "self-service" mashup that allows users to create new charts on the fly by selecting dimensions and measures. Every time a dimension or measure is (de-)selected, a new chart is created. This is intended. However, under heavy usage the solution becomes extremely slow. We have investigated the slowness, and it seems to be on Qlik's side: after playing around with the tool (creating ~50 charts), making a selection on a chart is very slow (it takes more than 10 s to render a simple bar chart).

In our code, we both close and destroy the previous chart when a new chart is created, so in our mind it shouldn't slow down with usage. We use the model.close() and destroySessionObject(id) calls. Could it be that Qlik is still storing the old charts in cache, despite us closing and destroying the objects?

We can see from our monitoring apps that during this kind of testing, the Sense Cache Hit Rate is high and any selection becomes very slow. So, is there a way to remove a chart from cache after it is closed? Or can we request Qlik Sense to clear the cache somehow? Or do you know what else we could do to improve performance?
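For reference, the lifecycle described above can be sketched roughly as follows. This is a minimal, hypothetical sketch, not our actual mashup code: the `app` object here is a mock so the snippet is self-contained; in a real mashup it would be the handle returned by the Capability API, and the teardown calls would be the real `model.close()` and `destroySessionObject(id)`.

```javascript
// MOCK app handle standing in for a Capability API app. The method
// names mirror the calls mentioned above (close / destroySessionObject),
// but this is illustrative only, not Qlik code.
const app = {
  sessionObjects: new Map(),
  nextId: 0,
  async createChart(dims, measures) {
    // dims/measures are illustrative parameters for an ad-hoc chart.
    const id = `obj-${this.nextId++}`;
    const model = {
      id,
      closed: false,
      close: async function () { this.closed = true; },
    };
    this.sessionObjects.set(id, model);
    return model;
  },
  async destroySessionObject(id) {
    this.sessionObjects.delete(id);
  },
};

let currentChart = null;

// Called every time the user (de-)selects a dimension or measure.
async function replaceChart(dims, measures) {
  if (currentChart) {
    // Tear down the previous chart before creating the new one.
    await currentChart.close();
    await app.destroySessionObject(currentChart.id);
    currentChart = null;
  }
  currentChart = await app.createChart(dims, measures);
  return currentChart;
}
```

With this pattern only one session object exists client-side at a time; whatever result sets the engine keeps cached for already-destroyed hypercubes would be server-side behavior, which is why we are asking whether the cache can be cleared.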
Best regards,
Susan
It's going to be a difficult task. You can obviously have the app's base data model open already; that part is easy. The harder part is that the underlying hypercubes used by the ad-hoc visualizations will not already exist, so there's nothing to warm there.
Your best option would be to super-tune the data model. There is a collection of best practices here: https://diagnostictoolkit.qlik-poc.com/
Thank you @Levi_Turner for commenting!
Can you clarify a bit? Since we are (to our understanding) closing/destroying the hypercubes of the different visualizations, why does the solution become slower and slower with usage? Logically, it should not slow down if we kill the old hypercubes each time we create a new visualization.

So, are the old visualizations still somehow affecting performance/memory through the cache? And is there a way to work around that?

The app we are using is already well optimized.
Best regards,
Susan
@Levi_Turner Could you help answer the question above?
Thank you in advance!
@Levi_Turner any comment on this issue?
Thank you!