Not applicable

Qlikview Cache - Large data with many users and section access

After reading Henric Cronström's excellent post The QlikView Cache, I started to think about issues we have with our implementation of QlikView.

Our organization has issues with memory utilization on our QlikView server deployment. We have a large amount of data, and we control access to that data with Section Access, mainly filtering on the regions each user is allowed to see.

Our issue is basically that memory grows over time: it passes "Working Set Min" and moves towards "Working Set Max", the application slows down as memory-management processes start, and sometimes the application stops. Qlik Support tells us this is QlikView "working as designed", and I understand that; it just does not help us...

Qlik Support's answer is to tune our application, with a reminder that it is "working as designed"; our challenge is how to fulfil our business requirements. We can increase memory, we can deploy over more servers to spread the load, or we can tune the application.

I can see now, thanks to the great articles I have read from Henric, how we can tune further: ensuring we have a common set of aggregations, and that each aggregation has the same spelling and format, would help reduce the occurrences of the same dataset being cached multiple times because of different hashes. So we can do that and test the result.
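To illustrate the point (the field name below is hypothetical), QlikView hashes the expression text as part of its cache key, so these two chart expressions create separate cache entries even though they return the same result:

    Sum(Sales)
    sum( Sales )

Standardising on a single spelling and format for shared aggregations means more users hit the same cached entry instead of each spawning their own.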

However, one comment in Henric's post rang an alarm bell for me. If different regions are selected in the aggregation (in our case controlled by Section Access, not by set analysis or user selection), then multiple cached objects will be created (understandably, as they are different result sets).

Therefore, with an increasing user base, an increasing dataset, and potentially more permutations in the Section Access table, is this the root cause of the increasing memory consumption of our applications? Does this match any experiences you may be having with memory utilization?

What are your thoughts? Do others experience this issue?

Are the perceived memory leaks that people talk about actually due to multiple copies of data being cached because users are filtered by Section Access?

Richard

5 Replies
Bill_Britt
Former Employee

Hi Richard,

Here are my two cents on this. QlikView is an in-memory solution and everything is stored in memory for fast response, so the working set is used to try to manage memory. When QVS hits the lower limit of the working set, it will start to clear the accumulated cache of aggregated data to reduce the memory it is using. Once it has flushed what it can, the only thing QVS can do is continue to use memory. This is what is causing the slowdown and poor performance.

So, if you have a poorly designed application it can use more memory than it really should, and it needs to be reviewed to see what can be done to help performance: things like creating flags in the load script instead of using "if" statements in chart expressions (see the sketch below).
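A minimal sketch of what that looks like (the table and field names are hypothetical):

    // Load script: compute the flag once per row at reload time
    Sales:
    LOAD
        OrderID,
        Region,
        Amount,
        If(Amount > 1000, 1, 0) as LargeOrderFlag
    FROM Sales.qvd (qvd);

A chart expression such as Sum(Amount * LargeOrderFlag) is then much cheaper than Sum(If(Amount > 1000, Amount)), because the condition is evaluated once at reload rather than for every row on every chart recalculation.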

If your server has the slots, the other thing you can do is increase the memory in the server.

Bill

Bill - Principal Technical Support Engineer at Qlik
To help users find verified answers, please don't forget to use the "Accept as Solution" button on any posts that helped you resolve your problem or question.
Not applicable
Author

Thanks Bill,

I agree: Qlik is in-memory, and it will use all the memory. The issue comes in very large deployments, where the system caches every generated dataset that differs from anything currently in memory. Hence, if you have Section Access that restricts User1 to "UK" data and User2 to "US" data, QlikView creates two datasets. If User3 has access to both, it creates a third. Therefore, with an ever-increasing user base and increasing data volumes and complexity, the memory requirement grows almost exponentially. Can it ever really be tamed?
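For anyone following along, a minimal sketch of the kind of Section Access table I mean (the user names and REGION values are illustrative); here User1, User2, and User3 each force a distinct reduced dataset, so the same chart can end up cached three times:

    SECTION ACCESS;
    LOAD * INLINE [
        ACCESS, NTNAME, REGION
        USER, DOMAIN\User1, UK
        USER, DOMAIN\User2, US
        USER, DOMAIN\User3, UK
        USER, DOMAIN\User3, US
    ];
    SECTION APPLICATION;
    // REGION links to a field of the same name in the data model,
    // reducing each user's view of the data accordingly.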

We have thought of larger memory and more servers, but both can be thwarted by additional aggregations or new dimensions being required, and more complicated Section Access will compound things further!

The more memory you have in a server, the more memory has to be searched to identify the candidates for removal. This takes CPU.

I have submitted an idea via the Ideas portal that describes (hopefully) a possible improvement: allow the developer to mark certain aggregations as volatile, so they would still be cached, but when Working Set Min was breached the server would already have candidates it knows are not required, and could reclaim those first.

Richard

luciancotea
Specialist

Doesn't using Publisher's "Loop and Reduce" feature help?

Not applicable
Author

Lucian,

"Loop and reduce!" would not help us, it reloads very frequently, less than an hour, has many users defined, with complex Section access, and is a very Large dataset.  We do split out by Month/year though.

Thanks

Richard

Not applicable
Author


All,

We are reviewing some problem documents, but I was wondering what people's thoughts are on QlikView's cache management when you have a large Section Access file and the cached aggregations become unwieldy.

Richard