I'm hoping I can find some QlikView users out there who are willing to share their CPU and RAM specs. If this thread is successful, I think it could help a lot of people.
We are running QV 12.20 on a VM, and our machine randomly crashes, so we must be doing something wrong. Our QV scenario is: 8 processors, 48 GB RAM, 20 concurrent users, 20 QVWs. Our largest and most-used app is 1 GB in size and has 20+ tabs, each primarily with straight charts, and most charts have set analysis in the expressions. (We also use NPrinting Publish Tasks and On Demand, which will consume resources too.)
If you're interested in offering up your experience, I'd be looking for the following info:
Thank you in advance to anyone willing to share.
I have no problem sharing something about our environments, but it will probably tell you little, since we are hosting different applications, with different concurrency, different needs, different complexity in the expressions, and so on.
One of the things I like most about QlikView (or Qlik Sense) is that it is not a black box: you can set up your environment as you see fit to meet your demands, and scale as required.
Replying to your questions, one of our PROD environments looks like:
This tells you nothing of value on its own, as your largest QVW and mine may have as much in common as an orange and a buffalo, but some things are worth mentioning:
I strongly suggest you get familiar with the Scalability Tools (always check for the most up-to-date version available). Start measuring each individual app, or at least those that feel slowest, simulating different levels of load and concurrency, and extrapolate from there to see what you can expect if you do nothing, if you increase memory, or if the number of users grows.
Coming to your issue: with a 1 GB app used by several users, 48 GB of RAM can fall just short. Using Qlik's own rule of thumb, if each of those 20 users consumes 10% of the app's footprint per session, that's 1 GB initial + 20 × 100 MB = 3 GB of RAM before a single click is performed. In my experience this 10% is rarely the case; it is most often a considerably lower value, like 1%, but it is still worth checking. The Scalability Tools can help with that.
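That rule-of-thumb arithmetic is easy to parameterize. A minimal sketch (the function name and signature are my own, not a Qlik API; the 10% default is the guideline figure mentioned above):

```python
def estimate_ram_gb(app_size_gb, concurrent_users, session_fraction=0.10):
    """Rule-of-thumb RAM estimate for a single QVW:
    base app footprint plus a per-session share of the app size
    for each concurrent user. session_fraction=0.10 reflects the
    10% guideline; measured values are often closer to 0.01."""
    return app_size_gb * (1 + concurrent_users * session_fraction)

# The 1 GB app with 20 concurrent users from this thread:
print(estimate_ram_gb(1.0, 20))        # 1 + 20 * 0.1 -> 3.0 GB
print(estimate_ram_gb(1.0, 20, 0.01))  # closer to ~1.2 GB at 1% per session
```

This only covers one app before any clicks; selections, cached result sets, and the other 19 QVWs all add to the total, which is why measuring with the Scalability Tools beats any formula.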
As important is the quantity as it is the speed. More memory but slow will not benefit more that it will harm, and same applies to CPUs: having more, slower CPUs might not help, depending were your problem resides. This could mean nothing if all charts have one field as dimension and one straight aggregation expression. Set analysis, when used properly, will help more than it will harm.
EDIT: Added service distribution and security/antivirus software. Added the 10% per user rule.
Thank you very much for taking the time to respond; it is very much appreciated.