Most likely, the document gets unloaded (unless it is pre-loaded, of course), but the cache generated for the document by users will remain - that is the memory you are seeing. The cache will never be flushed until the lower working set limit is hit. When the document is loaded into memory again, the generated cache will either be flushed or merged back with the document.
This is expected behavior.
So are we saying there is no way to forcibly unload a specific application from server memory? Restarting the QlikView Server service will do this - but it will kick out current users of other applications!
If anyone from QlikTech is monitoring this, then I would suggest this is something that should be added to the QEMC. It would be especially useful during beta testing, to conserve resources on the server after monitoring RAM and CPU usage for a new application. (Obviously a separate testing server would be more appropriate - one where you can restart the service whenever you want - but this isn't always practical!)
Yes, that is what we are saying at the moment. It's not as simple as just adding it to the QEMC; there are some facts to consider.
1. Cache will never be cleared from QVS memory unless a) it is rendered obsolete by changes in the rest of the data model, or b) we come close to or exceed the configured Working Set Limits in QVS. This is very much by design. Also, because of how the memory architecture works, it is not possible to throw out one document's specific cache segment on demand - it's simply too clever for that.
2. Document timeout will unload a document from memory (but leave its cache). This time limit is configurable.
3. Preloading and document timeout can be combined to balance memory usage on QVS.
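The three rules above can be summarized in a toy model. This is purely illustrative - class and method names are invented here, and none of this reflects actual QVS internals - but it captures the described behavior: cache survives document timeout, is discarded when the data model changes, and is only trimmed when the working set limit is approached.

```python
class ToyQvs:
    """Illustrative model of the cache rules described above (not real QVS code)."""

    def __init__(self, working_set_limit_gb):
        self.limit = working_set_limit_gb
        self.documents = {}  # loaded documents: name -> size in GB
        self.cache = {}      # generated cache:  name -> size in GB

    def used_gb(self):
        return sum(self.documents.values()) + sum(self.cache.values())

    def timeout(self, name):
        # Rule 2: document timeout unloads the document but leaves its cache.
        self.documents.pop(name, None)

    def reload(self, name, size_gb):
        # Rule 1a: a data-model change renders the old cache obsolete.
        self.cache.pop(name, None)
        self.documents[name] = size_gb

    def add_cache(self, name, size_gb):
        self.cache[name] = self.cache.get(name, 0) + size_gb
        # Rule 1b: cache is only trimmed when the working set limit is hit.
        while self.used_gb() > self.limit and self.cache:
            victim = max(self.cache, key=self.cache.get)
            self.cache.pop(victim)
```

Note that there is deliberately no method to drop one document's cache on demand - that is exactly the operation the post says the real memory architecture cannot support.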
Trust me, this is already being discussed at QlikTech R&D, but functionality like this needs to be carefully considered and "use cased" before being implemented.
Have we made any progress on this? We are on QV10 and were getting poor performance when the server hit 70% RAM (based on the working set), so I increased the working set limit to 90% (from 44 GB to 58 GB). The QVS service has promptly grown to that limit and started causing problems again. I could give it more memory once again - increase it to 72 GB - but I guess the service will just reach 72 GB quickly and slow down again.
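As an aside, the quoted figures are consistent with a server holding roughly 64 GB of physical RAM (an inference - the total is not stated in the post):

```python
# Assumed total RAM; the post only gives the derived working set limits.
total_ram_gb = 64

low_limit = total_ram_gb * 0.70   # ~44.8 GB, matching the quoted 44 GB
high_limit = total_ram_gb * 0.90  # ~57.6 GB, matching the quoted 58 GB
print(round(low_limit, 1), round(high_limit, 1))
```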
If we don't restart the QVS service, it will keep consuming memory right up to the upper working set limit.
Please advise: is there any practical way to clear the cache? Since we are a 24x7 operation, I wish to avoid a restart.
Is the ActiveDocument.ClearCache API any good?
Thanks in advance,
Your root cause is that the committed memory on the server is too high. Document data, cache, state data and so on - it all adds up to more than your server's available memory resources. Either decrease the load by optimizing document data models or archiving parts of the data, or increase the hardware capacity.
There is no sustainable way to "flush" the cache that would not have the same effect as restarting the QVS process.