I'm sure you are aware this is normal QVS behaviour. QVS takes advantage of caching, which benefits users who log in to the application through better performance. QVS will keep caching until it reaches the Working Set Low and High limits (by default 70% and 90% of physical RAM). The purpose of having hardware resources is to use them, and QVS does exactly that. However, it should release cache once it reaches the high threshold (90% by default). In practice the server becomes very slow when it reaches that limit, sometimes crashes, and the only way to restore the system is to restart the server.
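As a rough illustration of the behaviour described above (an assumed model, not QVS internals), the default percentages translate into byte thresholds like this:

```python
# Illustrative model of how the default Working Set limits translate
# into thresholds on a given server. The three "actions" are a
# simplification of the behaviour described above, not actual QVS code.

def working_set_limits(physical_ram_gb, low_pct=70, high_pct=90):
    """Return the (low, high) working-set thresholds in GB."""
    return (physical_ram_gb * low_pct / 100.0,
            physical_ram_gb * high_pct / 100.0)

def cache_action(used_gb, physical_ram_gb):
    """Below the low limit QVS caches freely; between low and high it
    grows more cautiously; at the high limit it must purge cache."""
    low, high = working_set_limits(physical_ram_gb)
    if used_gb < low:
        return "cache freely"
    elif used_gb < high:
        return "cache cautiously"
    return "purge cache"

print(cache_action(50, 96))   # below the 67.2 GB low limit
print(cache_action(80, 96))   # between low (67.2) and high (86.4)
print(cache_action(90, 96))   # above the 86.4 GB high limit
```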
If you are having this issue where RAM is not being released, please check the settings below:
1. Allow Only One Copy of Document in Memory (this should be enabled, because keeping multiple versions of the same document in memory will hurt your RAM usage).
2. Check whether any virus scanner is running against the mounted folder and, if possible, add an exception for that folder.
3. Use Windows PerfMon and check for peaks using the Memory, Processor and Cache counter groups. Add the relevant counters and babysit the server for a while to understand which document/app is causing the issue.
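For the babysitting step, something like the following can help once you have collected memory samples per document (alongside the PerfMon counters). The helper name and the sample data are invented for illustration:

```python
# Hypothetical helper: given per-document memory samples, report which
# document accounts for the peaks. Sample data below is made up.

from collections import defaultdict

def peak_memory_by_document(samples):
    """samples: iterable of (timestamp, document, memory_mb).
    Returns {document: peak_memory_mb}, sorted by peak descending."""
    peaks = defaultdict(float)
    for _ts, doc, mem_mb in samples:
        peaks[doc] = max(peaks[doc], mem_mb)
    return dict(sorted(peaks.items(), key=lambda kv: -kv[1]))

samples = [
    ("09:00", "Sales.qvw", 12000),
    ("09:05", "Sales.qvw", 45000),
    ("09:05", "Finance.qvw", 8000),
    ("09:10", "Finance.qvw", 9500),
]
print(peak_memory_by_document(samples))
# Sales.qvw peaks highest, so it is the first candidate to investigate
```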
I hope this helps!
Deepak has some good points. I just wanted to add that the "Document Timeout" setting does not affect the cache at all. Even if that timeout is hit, only the data model portion of the document will unload from memory - all accumulated cache generated by users will still persist until the document (or a document with similar data) loads into memory again; then the cache will "re-hook" to this document, and the obsolete portions of it will be flushed. Also, if a working set limit (70/90) is hit, QVS will only flush enough of the cache to bring memory usage below the limit that triggered the purge; it does not remove all of the cache for that document.
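A minimal sketch of that partial-flush behaviour (an assumed model, not the actual QVS code): when a limit is hit, only enough low-value entries are evicted to drop below the limit that triggered the purge, and the rest of the cache survives.

```python
# Assumed model of a partial cache flush: evict the least valuable
# entries first, stopping as soon as usage is back under the limit.

def partial_flush(entries, used_mb, limit_mb):
    """entries: list of (name, size_mb, value) where lower value means
    a better eviction candidate. Returns (kept_entries, evicted_names)."""
    evicted = []
    for name, size_mb, _value in sorted(entries, key=lambda e: e[2]):
        if used_mb <= limit_mb:
            break  # already below the limit; keep the rest of the cache
        used_mb -= size_mb
        evicted.append(name)
    kept = [e for e in entries if e[0] not in evicted]
    return kept, evicted

entries = [("chartA", 500, 0.2), ("chartB", 300, 0.9), ("chartC", 400, 0.5)]
kept, evicted = partial_flush(entries, used_mb=1200, limit_mb=800)
print(evicted)  # only chartA goes; that alone gets usage below 800 MB
```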
Many thanks Stefan. That makes sense regarding the "Document Timeout" not affecting the cache. I have some questions for you; please can you clarify them for my understanding...
- What algorithm is used to flush the cache? Some time back HIC told me that it uses a weighted number based on last usage, memory cost (to keep it in cache) and CPU cost (to recalculate it). Is this weighted method still used to flush the cache, or is it simple FIFO?
- Do you recommend turning ON the "Turbo Boost" setting for extra CPU cycles?
- When memory usage reaches the Working Set High limit and beyond, will QVS swap memory to disk?
Also, my personal observation is that turning off the "Allow Only One Copy of Document in Memory" setting is a killer when you have massive applications. I have instances where RAM is not released at all.
Thanks in advance.
- I can't publish the algorithm used, but HIC is correct: it is primarily based on cost and last usage. FIFO would not be a good fit for QVS in that respect.
- I really can't answer that straight up. Some of those settings depend very much on the hardware platform and QlikView version. The boys at the Scalability Center can answer that to a much better degree.
- QVS aims never to page anything to disk, since that instantly degrades performance by factors of thousands. The working set limits are in place precisely to prevent that from happening, and to make sure total memory starvation does not occur. However, if memory usage increases aggressively to levels beyond the low (or especially the high) limit, there is no way to prevent the Windows VMM from paging memory to disk if it deems it necessary: all memory is virtual, and the VMM is the boss in everything related to it, especially when you get close to exhaustion.
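To see why a weighted scheme beats FIFO here, consider this hedged sketch: entries are ranked by a score combining recency, memory cost and CPU cost to recalculate. The formula and weights are illustrative assumptions, not the (unpublished) QVS algorithm.

```python
# Illustrative weighted eviction score: old, memory-heavy,
# cheap-to-recalculate entries are evicted first; recently used,
# expensive-to-recompute entries are kept. Weights are assumptions.

import time

def eviction_score(last_used, memory_cost, cpu_cost, now=None,
                   w_age=1.0, w_mem=1.0, w_cpu=2.0):
    """Higher score = better eviction candidate."""
    now = time.time() if now is None else now
    age = now - last_used
    return w_age * age + w_mem * memory_cost - w_cpu * cpu_cost

# An old, cheap-to-recompute chart versus a fresh, expensive aggregation.
# FIFO would evict whichever entered the cache first; the weighted score
# evicts the old cheap entry regardless of insertion order.
now = 1000.0
old_cheap = eviction_score(last_used=100, memory_cost=50, cpu_cost=1, now=now)
fresh_costly = eviction_score(last_used=990, memory_cost=50, cpu_cost=40, now=now)
print(old_cheap > fresh_costly)  # True: evict the old, cheap entry first
```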
Regarding "Allow only one copy of document in memory" - yes. If unchecked, multiple copies of documents can be held in memory to serve clients that might still be using the "old" data. That will increase memory usage.
Hi Deepak, and Hi Stefan,
Thanks for taking the time to reply to my query.
We do have only one document allowed in memory checked, and have exceptions for the mounted folders in our AV software.
I think Stefan has made it clear to me what is happening:
"Even if that timeout is hit, only the data model portion of the document will unload from memory - all accumulated cache generated by users will still persist until the document (or a document with similar data) loads into memory again; then the cache will "re-hook" to this document, and the obsolete portions of it will be flushed."
I was not aware of this "re-hook" system; I guess this explains the ever-increasing RAM usage.
Out of interest, we have 96GB of RAM in our server, and peak usage is about 40 active document sessions. We have one main document, plus a couple of smaller, less frequently used documents. RAM usage tends to peak at about 80GB (the low working set limit?). From your experience, does this sound like a reasonable level of memory for the server to use? Perhaps it's difficult to judge without knowing more about the specifics of the documents?
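For what it's worth, a quick check of where 80GB sits against the default limits on a 96GB server (illustrative arithmetic only):

```python
# Where does an 80 GB peak sit relative to the default 70%/90%
# working-set limits on a 96 GB server?

ram_gb = 96
low_gb = ram_gb * 0.70    # 67.2 GB low limit
high_gb = ram_gb * 0.90   # 86.4 GB high limit
peak_gb = 80

print(low_gb, high_gb)
print(low_gb < peak_gb < high_gb)  # True: 80 GB is above low, below high
```

So the 80GB peak is above the low limit (where QVS is already constraining cache growth) but still below the high limit.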
I may install another 32GB of RAM; I just wanted to justify the purchase and ensure we don't run into any problems.
Thanks again for all your advice.