Hi everyone,
I have a QlikView Server running with several reports on it. For the past two days, QVS.exe has sporadically been using the CPUs so heavily that the AccessPoint gets slow and opened reports stop responding entirely. The load stays like this for at least 20 minutes; each time, I ended up killing/restarting the QlikView Server service because I needed the system back on track for the users.
Does anybody know how I can find out what causes the high CPU workload?
The QlikView performance logs, the Windows event logs, and Resource Monitor only tell me that the high load is coming from QVS.exe, that there is more than enough RAM, and that there are no more active users than usual. They give no information about which report or user is causing the load.
We have not deployed any new reports recently that would explain the sudden change in the server's behavior, and on some days the issue does not appear at all, while on others it occurs several times a day.
If anybody could give me a hint to solve this mystery, I would be really grateful.
If you need any further information, let me know.
If you have a lot of reports loaded at the same time, there is, in my experience, no good way to separate CPU usage per report.
I have had those types of problems, and my approach has been to investigate the QV event logs (or the Governance Dashboard) and try to figure out which report or user logged on just before the problems occur. In some instances the root cause was found there, but at other times nothing obvious turned up.
My first approach for getting rid of problems when no apparent reason can be found is to run the SharedFileRepair that comes with QVTools and replace the shared files where there are complaints.
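If it helps to get started, here is a minimal load-script sketch for pulling the QVS event logs into a QlikView app so you can filter around the times the problems occur. The folder path and format options are assumptions based on a default installation; check the header row of your Events_*.log files for the actual field names.

// Minimal sketch, assuming the default QVS log folder and
// tab-delimited log files with a header row; adjust the path
// and format options to your installation.
Events:
LOAD *
FROM [C:\ProgramData\QlikTech\QlikViewServer\Events_*.log]
(txt, codepage is 1252, embedded labels, delimiter is '\t', msq);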
Hi,
One thing to look at is the shared files of the QlikView apps. Sometimes a bookmark or an object a user created can cause the shared file to become huge, which in turn can stall the server.
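To spot an oversized shared file quickly, a load-script sketch along these lines lists every .Shared file with its size; the document folder path is an assumption, so point it at your own source-document directory.

// Sketch: list each .Shared file with its size in MB so the
// oversized ones stand out. The folder path is an assumption.
FOR EACH vFile IN FileList('C:\QlikView\Documents\*.Shared')
  SharedFiles:
  LOAD '$(vFile)' AS FileName,
       Round(FileSize('$(vFile)') / 1048576, 0.1) AS SizeMB
  AUTOGENERATE 1;
NEXT vFile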
Does your setup allow "Server Collaboration", i.e. the creation of Server Objects in the AccessPoint? If so, a simple but badly written expression in a Server Object can run off with all the RAM and/or CPU cycles, and such objects can be difficult to spot.
If you think this could explain the behavior you are experiencing, try to determine exactly when the CPU goes haywire and who is actively using a document in the AccessPoint at that point in time.
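For illustration (the field name is hypothetical), this is the kind of contrast to watch for in Server Objects: the first expression forces the server to build a huge row-level string once selections are cleared, while the second answers a similar question at a fraction of the cost.

// Hypothetical runaway server-object expression: with no selections,
// this concatenates every distinct transaction ID into one string.
Concat(DISTINCT TransactionID, ', ')

// A far cheaper expression if the user only needs a count:
Count(DISTINCT TransactionID)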
Yes, the creation of Server Objects is allowed and actively used. There are >1000 user objects.
I will try to find out via the Governance Dashboard which users were active during the peak times, deactivate their objects, and wait a couple of days to determine whether your tip helped.
Thank you!
So here I am, back again. After a few weeks of silence, the problems have come back.
After playing detective for a while I've come to the following conclusion:
I need to know how much CPU each report is using over the course of the day.
Is there any way to get this information?
As far as I know CPU isn't tracked at that level. However, there is a CPU_spent__s_ field in the Governance Dashboard that links to a session, so you may be able to pinpoint the problem session and work your way to the object from there.
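Building on that, a sketch like the following loads the QVS session logs (the source of the Governance Dashboard's CPU_spent__s_ field) and exposes CPU seconds per session. The path and field names are assumptions from a default setup, so verify them against your own log header.

// Sketch, assuming the default session-log location and tab-delimited
// files with embedded labels; field names vary by version, so check
// the header row of your Sessions_*.log first.
Sessions:
LOAD [Document],
     [Authenticated user] AS SessionUser,
     [CPU spent (s)] AS CPUSeconds
FROM [C:\ProgramData\QlikTech\QlikViewServer\Sessions_*.log]
(txt, codepage is 1252, embedded labels, delimiter is '\t', msq);

A straight table on Document and SessionUser with Sum(CPUSeconds) as the expression then puts the heaviest sessions at the top.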
Additionally, several things to look into:
1) As suggested previously, user objects that are poorly thought out may be a problem. This is particularly true of detailed charts that may get very, very long when selections are cleared.
2) Developer objects may exhibit the same behavior - if you have any detailed tables (pivot or straight), consider a calculation condition limiting the chart to only run if e.g. 10,000 or fewer lines are selected; see the sketch after this list.
3) Exporting large tables to Excel is a horrible resource hog even if you can get the table to render in QV. That's something to check the logs for as well, assuming you have audit-level logs.
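As a concrete example of the calculation condition in item 2 (the key field name is hypothetical; substitute your own line-level key):

// Chart Properties -> General -> Calculation Condition:
// only render the chart when 10,000 or fewer lines are in scope.
Count(DISTINCT OrderLineID) <= 10000

When the condition is not met, QlikView shows the "calculation condition unfulfilled" message instead of burning CPU on a table nobody could read anyway.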
Thank you for your advice. On Friday I deleted all shared files and deactivated the option that lets users create their own objects.
No performance issues since then. I'm keeping my fingers crossed that everything keeps working fine.
Nevertheless, it's a pity that the tool is not able to track the resource consumption of specific tasks in more detail.