braham
Creator

Performance of QlikView on a new server

Due to some new business we expect the data volumes on our QlikView server to increase significantly, so we purchased a new dedicated QlikView server. The key specifications are 2 × 6-core Xeon processors (2.5 GHz), 132 GB of memory and plenty of disk space. We have disabled multi-threading.

The performance of QlikView on this machine is very disappointing. I tested the build of one of our models on an i5 desktop machine and it was only marginally slower than the new server. When recalculating a graph the machine seems very sluggish. Initial tests on the new machine showed it to be only slightly faster than the previous server, which had fewer cores and less memory. Tests of SQL data extracts from the warehouse show the new server to be nearly twice as fast as the previous one, but SQL hardly uses any CPU, only memory and disk I/O.

I know there are techniques for speeding up QlikView models, but we are comparing the same model on the two machines.

What I have noticed while running the Windows Resource Monitor is that CPU usage rarely rises beyond about 70%. When I ask QlikView to refresh a graph, I click across to the monitor and watch what is going on. Once the QlikView model has loaded there is virtually no disk I/O; the CPU peaks at about 70% and you wait and wait for the graph to refresh.

I have also monitored the building of a model. The CPU runs at about 40-60% for most of the time. When the model starts to build the synthetic keys, CPU usage drops to about 8%; I assume that is because only one of the 12 cores is being fully utilised (1/12 ≈ 8%). Building the synthetic keys also takes a long time. All the testing I did was when there were no other users on the server.
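
For context, my understanding is that a synthetic key appears whenever two tables share more than one field name, and QlikView builds the composite key itself after the load, which matches the single-core behaviour above. A minimal illustration of the pattern (the table and field names are made up, not from our actual model):

```
// Hypothetical two-table load: Orders and OrderLines share both
// OrderID and OrderDate, so QlikView generates a synthetic key
// over the pair in the post-load phase (which, as observed above,
// appears to run on a single core).
Orders:
LOAD OrderID,
     OrderDate,
     CustomerID
FROM Orders.qvd (qvd);

OrderLines:
LOAD OrderID,
     OrderDate,     // second shared field -> synthetic key
     ProductID,
     Quantity
FROM OrderLines.qvd (qvd);
```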

I would appreciate some input from others who have dealt with this issue. Is there any way to get QlikView to utilise more of the CPU? I am left with the impression that the models would run much faster if the CPU were better utilised. My thinking may be a bit simplistic, but I would expect to see one of the components (CPU, memory, disk I/O or network I/O) running at 100%; if the CPU is not at 100%, the disks, memory or network should be fully utilised.

I look forward to some input on this matter.

Regards

Braham

8 Replies
Not applicable

**bump please**

Not applicable

Why have you disabled multi-threading? Is this recommended in the manuals?

braham
Creator
Author

Hi Ashar

I have tested with and without multi-threading and it does not make any difference. I did come across some discussion groups (I think on this forum as well as others) that suggested you disable multi-threading.

Not applicable

We tested the same thing with and without multi-threading, and the results without multi-threading were mixed. The first time you open the document on the server using the desktop client, the response time is very poor (around 28 seconds for some graphs). Closing that document, copying it into a new document and opening the new document, the response time is under 2 seconds for the same graph (same scenario). Closing that and reopening the old document, the response time is also under 2 seconds. Note that RAM clears after closing the document/desktop client.

We are running 11.2 SR5 (desktop client) on Windows Server 2008 R2 with 2 × 6-core CPUs and 128 GB RAM.

Colin-Albert

Can you set Task Manager to show separate graphs for each CPU, and then monitor the CPU utilisation whilst the reload is running? You should see the load being shared across all cores. Some tasks in the reload may be single-threaded, which will slow performance; following the utilisation against the reload progress should help identify which parts of the script to address.
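
One low-tech way to line the script up against the CPU graphs is to write timestamps into the reload log around each heavy section; a rough sketch (the table and file names are placeholders, not from your model):

```
// Write timestamps into the reload log so each script section can
// be matched against the per-core CPU graphs recorded at the same
// time in Task Manager.
LET vStart = Now();
TRACE ==== Fact load started $(vStart) ====;

Fact:
LOAD * FROM FactTable.qvd (qvd);   // placeholder source

LET vEnd = Now();
TRACE ==== Fact load finished $(vEnd) ====;
```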

If you have synthetic keys, these should be removed, as they can take a significant amount of processing to create.
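
The usual pattern is to build one explicit composite key and drop the duplicated fields from one side of the join; a sketch with illustrative field names:

```
// Replace the implicit synthetic key with one explicit composite
// key. AutoNumber() compacts the concatenated string into an
// integer, which keeps the key field small in memory.
Orders:
LOAD AutoNumber(OrderID & '|' & OrderDate) AS %OrderKey,
     OrderID,
     OrderDate,
     CustomerID
FROM Orders.qvd (qvd);

OrderLines:
LOAD AutoNumber(OrderID & '|' & OrderDate) AS %OrderKey,
     ProductID,
     Quantity
     // OrderID and OrderDate are deliberately not loaded here,
     // so %OrderKey is the only field the two tables share.
FROM OrderLines.qvd (qvd);
```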

Good performance is not simply down to throwing more processing power at the application; good application design is equally important.

Not applicable

Yes, we monitored the CPU utilisation; the load is at 70% across all CPUs and there are no synthetic keys. The fact table is huge (93 columns with over 100 million records) and there are very complex calculations in the expressions, but the schema is a neat star schema. We have implemented in excess of 75 dashboards for 30+ clients, including telecoms with a mind-boggling number of records of raw data, and this is the first time we have seen this behaviour.
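
One cheap experiment we may try is loading only the fields the dashboard expressions and dimensions actually reference, since every unused column in the 93 still costs RAM in the in-memory model. Roughly (field names invented):

```
// Trim the wide fact table down to only the fields the charts
// actually use; unused columns still consume RAM once loaded.
Fact:
LOAD %CustomerKey,     // invented field names, for illustration
     %DateKey,
     Revenue,
     Quantity
FROM Fact.qvd (qvd);
```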

My question is this: if the load is at 70% across all CPUs (rather than peaking), how can we trace the performance? Do we disable sections of the dashboard and test? Will the Governance Dashboard help?

Colin-Albert

Does the server have an eco mode that reduces the maximum CPU speed? Check the BIOS settings.

Not applicable

Not sure; we will check with the client. We will also check the NUMA settings.

Thank you