Hi guys,
My client has recently purchased a new QlikView server,
They've moved from a:
Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30 GHz with 16 GB RAM and 8 CPUs
to:
Intel(R) Xeon(R) CPU E5620 @ 2.40 GHz with 64 GB RAM and 16 CPUs
No publisher is used, QlikView 11.2 SR12 installed with QlikView Server.
The main deployed model took an estimated 23 minutes to reload on the old server.
With the new server in play I figured the reload would be substantially faster, but that is not the case: the new server takes 40 minutes to reload the same model.
The QlikView settings on the two servers are identical.
Can anyone shed some light on why the new server is taking longer to reload than the old one?
Thanks.
old: @ 3.30 GHz
new: @ 2.40 GHz
Apparently the reload benefits more from a higher CPU frequency than from more CPUs. This makes perfect sense to me. If you had a large number of reload tasks executing simultaneously, you could see an overall improvement on the new server.
In addition to what Gysbert says, reloading is often more I/O bound than CPU bound. Perhaps the network connection of the new server is performing poorly for some reason. Is it using the same antivirus software with the same settings? Are there other tasks that the new server is performing? Have you compared the CPU utilisation between the two servers?
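One simple way to see where the time goes is to timestamp the phases of the script with TRACE statements and compare the output between the two servers. A minimal sketch, assuming the script splits roughly into a QVD-load phase and a transformation phase (the phase names are only placeholders):

// Log a timestamp at the start of the reload.
LET vStart = Now();
TRACE Reload started: $(vStart);

// ... QVD loads here (mostly disk / network I/O) ...

LET vLoadDone = Now();
TRACE QVD load phase finished: $(vLoadDone);

// ... joins, transformations and aggregations here (mostly CPU) ...

LET vTransformDone = Now();
TRACE Transformation phase finished: $(vTransformDone);

The timestamps appear in the Script Execution Progress window and the document log, so comparing them between the old and new server should show whether the extra time is going into the I/O-heavy loads or the CPU-heavy transformations.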
Hi gents,
Thanks for the prompt responses, duly appreciated. I've forwarded your recommendations to our IT team; they'll analyse and get back to me.
Apart from the CPU clock speed, the servers' applications, software and tasks are a replica of each other.
Also, my testing so far, and the results given, pertain to QlikView loading and computing only the already generated QVDs; in other words, QlikView is reloading the final model. So I don't think the network could be the issue.
Hope this makes sense. Maybe getting a faster processor than the previous 3.3 GHz one, together with the increased number of CPUs and the increased RAM, would make the new server perform better than the old one; am I correct in saying this?
To increase performance, you need to determine the limiting factor. For example, adding CPU when the process is RAM bound will not help. Is the process CPU bound, RAM bound, network I/O bound or disk I/O bound? Look at the Windows performance counters to determine where to invest more money.
You could also analyse the application(s) themselves. Is it loading a very large amount of data? 40 minutes is a long time for a reload, but to be expected if you are doing complex transformations on multi-million row tables, or possibly doing simpler loads of 100 million row tables. It may be possible to optimise the reload to improve performance.
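If the final model is built straight from the QVDs, it is also worth checking the document log to see whether those loads are still running as optimised QVD loads. A rough sketch of the difference, with placeholder file and field names:

// Optimised QVD load: plain field list, no calculations, no WHERE clause
// (a single-field WHERE Exists(Field) is the one filter that stays optimised).
Facts:
LOAD *
FROM [..\QVD\Facts.qvd] (qvd);

// The same statement with a calculation or an ordinary WHERE clause drops
// back to a standard load, which is considerably slower, e.g.:
// LOAD *, Quantity * Price AS LineValue
// FROM [..\QVD\Facts.qvd] (qvd)
// WHERE OrderDate >= '2014-01-01';

The document log flags the loads that ran optimised, so a quick scan will show whether any of the big QVD reads have dropped back to standard loads.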
Hi Jonathan,
Thanks again for your insights and recommendations.
The model that takes 40 minutes on the new server takes about 22 minutes on the old one.
The models are exactly the same; by improving server performance I would like to bring the reload time down to about 15 minutes or less.
The model has very complex transformations and is optimized as far as I have been able to take it so far.
I'll ask IT to check for I/O bottlenecks as well.