Has anyone been able to achieve acceptable performance running production QVS in AWS?
-Rob
Very interesting question Rob. We are still hosting QV internally on physical servers (DL580s + SSD). For a QS POC using geo analytics I'm using internally hosted VMs, and performance has been OK, but the data volumes and user base are minuscule compared to the volumes on our QV deployment (i.e. tens vs. thousands of users). Will be very interested to see responses on this thread, as where practical we're looking to move in that direction (although I think it will be difficult for larger deployments unless the majority of your data is hosted, or at least cached, in the same location as the QV servers). Would have used QS Cloud for this latest POC, but it doesn't support geo analytics plug-ins, so I had to use internal VMs + QS Enterprise.
Hi,
Yes, very interesting thread. I will agree with Graeme that performance is significantly lower for our QS setup in AWS compared to what we had with our (internally hosted) VM QV environment. To compare: a 2-node VM environment, each node with 4 CPUs and 48 GB RAM, supported 500 users in QlikView (plus intraday reload jobs) without major issues, versus a 32-CPU, 244 GB RAM AWS setup used for testing by only 10 users (no admin jobs running on this host), where the user experience isn't the greatest (up to 30 seconds to render charts with simple calculations, no set analysis; file size 5 GB).
To be honest, this is the first time I'm working with a single-node QS setup on AWS, so I can't really say whether this is related to AWS performance, QS performance, data volumes or our code. I do know that for the PoC we were still using a locally hosted VM, and we had far fewer concerns about performance during development.
Would love to know what the rest of you think,
Micha
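For context on whether 244 GB should be ample for a 5 GB app, a common back-of-envelope for Qlik sizing multiplies the on-disk app size by an expansion factor for the in-memory data model, plus a per-concurrent-user overhead. A minimal sketch of that arithmetic; the expansion factor and per-user fraction below are illustrative assumptions, not Qlik's official numbers:

```python
def estimate_ram_gb(app_size_gb, concurrent_users,
                    expansion_factor=4.0, user_overhead_fraction=0.10):
    """Back-of-envelope RAM estimate for hosting a Qlik app.

    base footprint = on-disk app size * expansion factor (the in-memory
                     model is typically larger than the compressed file)
    per-user cost  = a fraction of the base footprint per concurrent user
                     (session state, cached result sets)
    Both factors are rough assumptions -- measure your own apps.
    """
    base = app_size_gb * expansion_factor
    return base + base * user_overhead_fraction * concurrent_users

# 5 GB app, 10 concurrent users: 20 GB base + 20 GB of session overhead
print(round(estimate_ram_gb(5, 10), 1))  # 40.0
```

Under those (assumed) factors, 10 users on a 5 GB app land nowhere near 244 GB, which suggests the slow rendering is more likely CPU, storage or app design than RAM pressure.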
Hi Michalina,
It is difficult to make a like-for-like performance comparison between your on-premises infrastructure and AWS, especially for Qlik Sense. It depends entirely on how you set up your AWS environment.
AWS utilises CPU credits across their EC2 instances; once that credit is spent, you will not get the performance that was advertised. Especially with Qlik, which is CPU-intensive software, the burst credits will be exhausted before you realise it.
To optimise AWS for Qlik, you can turn the Qlik node into an immutable architecture. With a Qlik Sense shared-persistence node this is possible by keeping the configuration in an AWS RDS database and the file share on a Linux box (or any other applicable file share). The central node can then be "switched off", or you can provision a new instance and get a fresh set of burst credits.
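The burst-credit point above is easy to quantify: one CPU credit equals one vCPU running at 100% for one minute, and a burstable instance earns credits back at a fixed hourly rate. A minimal sketch of the burn-down arithmetic; the balance and earn rate used in the example are illustrative assumptions, not a specific instance type's figures:

```python
def hours_until_credits_exhausted(balance, vcpus, utilisation,
                                  earn_rate_per_hour):
    """Estimate how long a burstable (T-series) EC2 instance can sustain
    a given CPU utilisation before its credit balance hits zero.

    One CPU credit = one vCPU at 100% for one minute, so the instance
    burns vcpus * utilisation * 60 credits per hour while earning
    earn_rate_per_hour back.  Returns None when usage never outpaces
    accrual (i.e. you are at or below the baseline).
    """
    burn_per_hour = vcpus * utilisation * 60
    net_burn = burn_per_hour - earn_rate_per_hour
    if net_burn <= 0:
        return None  # credits never run out at this utilisation
    return balance / net_burn

# Illustrative figures only: 2 vCPUs pegged at 100%, a 576-credit
# balance, and 36 credits earned per hour.
print(hours_until_credits_exhausted(576, 2, 1.0, 36))  # ~6.86 hours
```

A CPU-hungry Qlik reload pins every vCPU, so the balance drains in hours; once it is gone the instance is throttled to its baseline, which matches the "performance fell off a cliff" symptom described above.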
Hi!
Thanks very much for your advice - sounds like this is something we should definitely discuss with our AWS engineers. Good luck to you all!
Micha
Hi Rob,
We're actually achieving great performance on r4.large instances. These are the new DDR4 products for memory-optimised workloads. High-performance SSDs are the way to go for EBS volumes; they really speed up reload times, as well as loading of the access point.
Piers
@piersbatch
Hi Neo,
I think it is only the T-series instances that use CPU credits.
Piers
@piersbatch