Here is my two cents on this. Qlik is an in-memory solution, and everything is stored in memory for fast response, so the working set limits are used to try to manage memory. When QVS hits the low limit of the working set, it starts to clear the accumulated cache of aggregated data to reduce the memory it is using. Once it has flushed what it can, the only thing QVS can do is continue to consume memory. This is what causes the slowdown and poor performance.
So, a poorly designed application can use more memory than it really should, and it needs to be reviewed to see what can be done to help performance. Things like creating flags in the load script rather than using "if" statements in chart expressions.
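The flag technique above is a Qlik load-script pattern, but the principle translates to any language: evaluate the conditional once per row at load time, then let every aggregation reuse the cheap precomputed flag. A minimal sketch in plain Python (the row data and field names are illustrative, not anything from Qlik):

```python
# Sketch of the "flag instead of if()" idea. Names are illustrative.
from datetime import date

rows = [
    {"amount": 100.0, "order_date": date(2024, 3, 1)},
    {"amount": 250.0, "order_date": date(2023, 7, 9)},
    {"amount": 40.0,  "order_date": date(2024, 11, 2)},
]

# "Load script" step: evaluate the condition once per row, store a flag.
CURRENT_YEAR = 2024
for r in rows:
    r["cy_flag"] = 1 if r["order_date"].year == CURRENT_YEAR else 0

# "Chart expression" step: every aggregation now multiplies by the flag
# instead of re-evaluating the conditional on every row, every time.
cy_sales = sum(r["amount"] * r["cy_flag"] for r in rows)
print(cy_sales)  # 140.0
```

In Qlik terms this is the difference between `Sum(If(...))` in a chart, which runs per row on every recalculation, and `Sum(Amount * CurrentYearFlag)`, which does the expensive test once during reload.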
If your server has free slots, the other thing you can do is add more memory to the server.
I agree that Qlik is in-memory and will use all the memory it can. The issue in very large deployments is that the system caches every generated data set that differs from any already in memory. So if section access restricts User1 to "UK" data and User2 to "US" data, QlikView creates two data sets; if User3 has access to both, it creates a third. With an ever-increasing user base, and increasing data volumes and complexity, the memory requirement grows almost exponentially. Can it ever really be tamed?
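The multiplication described above is driven by the number of *distinct* reduction sets, not the number of users: QVS keys the cached data set on the set of values a user is allowed to see, so users with identical access share a copy, while each new combination costs a new one. A small illustrative sketch (user names and access sets are hypothetical):

```python
# Why section access multiplies cached data sets: each distinct
# *combination* of visible values implies its own reduced copy.
user_access = {
    "User1": {"UK"},
    "User2": {"US"},
    "User3": {"UK", "US"},  # union of the two above -> a third data set
    "User4": {"UK"},        # same set as User1 -> shares that data set
}

distinct_reductions = {frozenset(v) for v in user_access.values()}
print(len(distinct_reductions))  # 3 cached data sets for 4 users
```

With n possible reduction values there are up to 2^n - 1 non-empty combinations, which is why the growth feels almost exponential as access rules get more complicated.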
We have thought about larger memory and more servers, but both can be thwarted by additional aggregations being required or new dimensions, and more complicated section access compounds the problem further!
And the more memory you have in a server, the more memory has to be searched to identify the candidates for removal, which costs CPU.
I have submitted an idea via the Ideas portal, "Improve expression handling to facilitate better cache management", that describes (hopefully) a possible improvement: allow the developer to mark certain aggregations as volatile. They would still be cached, but when the working-set minimum was breached, QVS would already have candidates it knows are not required and could reclaim those first.
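The proposal above is a feature request, not an existing QVS capability, but the eviction policy it implies can be sketched. The idea: entries tagged volatile form a ready-made candidate list, so when the working-set minimum is breached the server can reclaim them immediately instead of scanning the whole cache. Everything below (class, method names, the `volatile` flag) is a hypothetical design, not Qlik's API:

```python
# Hypothetical sketch of a result cache with developer-marked
# "volatile" entries that are evicted first under memory pressure.
class ResultCache:
    def __init__(self):
        self.entries = {}        # expression text -> cached result
        self.volatile_keys = []  # cheap, pre-identified eviction candidates

    def put(self, key, value, volatile=False):
        self.entries[key] = value
        if volatile:
            self.volatile_keys.append(key)

    def on_working_set_min_breached(self, max_evictions):
        """Free up to max_evictions entries, volatile ones first,
        with no scan of the full cache."""
        freed = 0
        while self.volatile_keys and freed < max_evictions:
            key = self.volatile_keys.pop()
            self.entries.pop(key, None)
            freed += 1
        return freed

cache = ResultCache()
cache.put("Sum(Sales)", 1_000_000)                  # stable aggregation
cache.put("Sum(Sales*Rand())", 4.2, volatile=True)  # marked volatile
print(cache.on_working_set_min_breached(5))  # 1: only the volatile entry goes
print("Sum(Sales)" in cache.entries)         # True: stable result survives
```

The design choice is that the volatile list is maintained at insert time, so the expensive "search memory for candidates" step mentioned earlier is avoided entirely when the limit is hit.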