As far as I understand it, QVS.exe is the process behind the AccessPoint service.
Any document opened in AccessPoint by any user is cached in QVS.exe, so its memory usage keeps growing until it reaches the low working set limit configured in the QMC.
In our case the limit is set to 70% (I believe that is the default), which on our server with 128 GB of total RAM means roughly 90 GB "dedicated" to QVS.exe.
So far, no problem (if I have the key facts right).
QVB.exe is responsible for scheduled tasks, and each reload task spawns a new QVB.exe instance (each with its own RAM footprint, as far as I can see).
Now comes the tricky part: as I understand it, there is no kind of "sharing or communication" between QVS.exe and QVB.exe.
In some cases QVS.exe is at its full 70% limit, which leaves only about 38 GB of RAM for the rest of the system.
At month-end closing, reports are reloaded that can briefly need up to 48 GB of RAM. Since QVB.exe does not coordinate with QVS.exe, this has the following consequences on our system:
- RAM usage around 97%
- services are cancelled (visible in the QMC for as long as it stays up)
- the QMC itself goes down (unreachable by any means)
- remote access to the server becomes impossible
- AccessPoint is down and does not restart automatically
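A quick back-of-the-envelope check of the numbers above (128 GB total RAM, 70% working set limit, 48 GB reload peak) makes the overcommit explicit:

```python
# Back-of-the-envelope check of the memory figures from this post.
total_ram_gb = 128
working_set_limit = 0.70   # QVS.exe low working set limit set in the QMC
reload_peak_gb = 48        # month-end reload peak needed by QVB.exe

qvs_gb = total_ram_gb * working_set_limit   # RAM QVS.exe may claim
rest_gb = total_ram_gb - qvs_gb             # left for the OS and QVB.exe

print(f"QVS.exe working set: {qvs_gb:.1f} GB")   # 89.6 GB (~90 GB)
print(f"Remaining RAM:       {rest_gb:.1f} GB")  # 38.4 GB (~38 GB)
print(f"Shortfall at peak:   {reload_peak_gb - rest_gb:.1f} GB")  # 9.6 GB
```

So at the reload peak the box is roughly 10 GB short, which matches the ~97% RAM usage observed before things fall over.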
In some rare cases the other processes come back to life and seem to work, BUT AccessPoint always stays down until QVS.exe is restarted. Sometimes the QMC then shows a strange, incorrect task state, and nothing can be cancelled or started.
In any case, the safest way back to a stable working environment seems to be restarting the services via a batch file:
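A minimal restart batch could look like the sketch below. Note that the Windows service names used here (QlikviewServer, QlikviewDistributionService) are assumptions, not taken from the post — verify the actual names on your server with `sc query` before relying on this:

```bat
@echo off
rem Sketch: restart the QlikView services after a memory lock-up.
rem Service names are installation-dependent -- check with: sc query state= all

net stop "QlikviewDistributionService"
net stop "QlikviewServer"

rem Give the OS a moment to release the working set before restarting
timeout /t 30 /nobreak

net start "QlikviewServer"
net start "QlikviewDistributionService"
```

Stopping the distribution service first avoids new reload tasks being launched while QVS.exe is coming down.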
Probably not what you want to hear: you could reduce the maximum working set value and/or increase the RAM. Another approach would be to give the Publisher its own dedicated machine. Either way, you need to prevent the server from reaching 100% RAM usage and/or sitting at 100% CPU for a longer period, because not only can the Qlik-internal communication between the services break, the OS itself can become unstable. In the end, the available resources must be larger than the peak consumption in your environment; otherwise you will have to live with a greater or lesser risk of instability.
Besides this, you could try to optimize your applications, particularly the ETL part. That means using incremental logic for all heavier extractions/transformations, and slicing both the data and the ETL applications so they stay smaller and can be distributed more granularly across the update/maintenance time frames.
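As an illustration of the incremental logic mentioned above, here is a common QlikView load-script pattern: fetch only the changed rows from the source, then merge them with the history stored in a QVD. All table, field, and file names (Orders, OrderID, ModifiedDate, Orders.qvd) are placeholders for your own model, and $(vLastReload) is assumed to hold the timestamp of the previous successful reload:

```
// Incremental reload sketch -- names are placeholders, adapt to your model.
// $(vLastReload) = timestamp of the last successful reload.

// 1. Pull only rows changed since the last run from the source.
Orders:
SQL SELECT OrderID, ModifiedDate, Amount
FROM dbo.Orders
WHERE ModifiedDate >= '$(vLastReload)';

// 2. Append the unchanged historical rows from the stored QVD,
//    skipping keys already loaded in step 1.
Concatenate (Orders)
LOAD OrderID, ModifiedDate, Amount
FROM Orders.qvd (qvd)
WHERE NOT Exists(OrderID);

// 3. Store the merged result back for the next run.
STORE Orders INTO Orders.qvd (qvd);
```

Because only the delta is fetched and the QVD append is an optimized load, the QVB.exe instance needs far less RAM and runs much shorter, which directly reduces the month-end peak described above.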