kdmarkee
Specialist

QV 12.20 - Looking for folks to share their CPU and RAM set up

I'm hoping I can find some QlikView users out there who are willing to share their CPU and RAM specs.  If this thread is successful, I think it could help a lot of people.

We are running QV 12.20 on a VM and our machine crashes randomly, so we must be doing something wrong. Our QV scenario: 8 processors, 48 GB RAM, 20 concurrent users, 20 QVWs, and our largest and most-used app is 1 GB in size with 20+ tabs, each containing primarily straight-table charts, most of which use set analysis in their expressions. (We also use NP Publish Tasks and On Demand, which consume resources too.)

If interested in offering up your experience, I'd be looking for the following info:

  • QV version
  • physical or virtual machine
  • clustered or not
  • CPUs/processors
  • RAM
  • number of concurrent users
  • number of qvws
  • size of your largest qvw and if it has a lot of tabs, lots of charts and filters, and lots of set analysis

Thank you in advance to anyone willing to share.

2 Replies
Miguel_Angel_Baeyens

I have no problem sharing something about our environments, but it will probably tell you little, since we host different applications and have different concurrency, different needs, different complexity in the expressions, and so on.

One of the things I like most about QlikView (or Qlik Sense) is that it is not a black box: you can set up your environment as you see fit to meet your demands, and scale as required.

Replying to your questions, one of our PROD environments looks like this:

  • QlikView 12.10.20700 SR9
  • AWS instances, so virtual
  • QVS Clustered, QDS multiple, not clustered
  • CPUs varying largely depending on the service, for the QVS, 32 vCPUs
  • 256 GB RAM
  • Peaks of 700 concurrent users, 200 on average
  • 440 QVWs (PROD) of very different complexity, use case, layout and reloading time
  • Largest QVW is 8 GB in RAM, with over 5k expressions and over 1k objects; some of the expressions are very complex (several dozen lines long)

By itself this tells you nothing valuable, as your largest QVW and mine can be as different as an orange and a buffalo, but some things are worth mentioning:

  • Clustered vs. unclustered has little to no impact on its own; it depends on your users and apps: do you need your apps distributed across several servers so that, if one fails, the load can be balanced, or is that not important?
  • QlikView version: also not significant unless you are comparing early 11.20 vs. 12.x (and except for QlikView 12.20 IR through 12.20 SR4). Although the QIX engine changed from 11.20 to 12.x, the biggest resource constraints tend to be on the distribution service, not on the server service (there is an impact there too, but in general not as big).
  • Virtual vs. physical: since QlikView 11.20 SR12 the differences are very small if the environment is properly set up. Put differently, a VM will not make your environment slower as long as it is configured correctly and has the appropriate resources dedicated (yes, virtualization always entails a bit of overhead, but in my experience it is negligible, Meltdown and Spectre aside). Bare metal can indeed be slower than a well-configured virtualized environment.
  • I'm missing storage, which is crucial (and which a lot of otherwise good consultants simply disregard). .Shared and .PGO (and, to a lesser extent, .Meta) files can be accessed as often as every click a user makes. If the disk is not fast enough, not only can reloads, opens, and saves of big files take longer, but users' response times can be delayed as well. The difference between a mid-latency NAS and a local SSD can be massive in loading, saving, and opening times, in favor of the latter.
  • I'm missing the DSC/security setup: resolving a user against a complex, multi-domain directory, where users can belong to groups in each of the domains, is much more time-consuming (although not that much more resource-consuming) than a header or a cookie.
  • I'm missing the web server setup in particular (IIS vs. QVWS) and the network setup in general: if a user's request has to travel across 25 network devices in 10 countries, their experience will be slower than a direct connection to an in-house, broadband-connected datacenter. The same applies if the load balancer is undersized or sticky sessions are not supported.
  • I'm missing the Windows OS version: for example, Windows Server 2012 and higher provide SMB Direct, which significantly improves performance for big files and file traffic (e.g., distribution).
  • I'm missing the service distribution: if QVS and QDS are installed on the same server, they will compete for resources during reloads, and QVS can be heavily impacted, because QDS in 12.x uses significantly more CPU and RAM and has no affinity settings (i.e., you cannot pin a fixed set of CPUs to the QDS and have only those CPUs do its work).
  • I'm missing other software running in the background on the QlikView servers, specifically antivirus and backup. Big files, as well as .Shared and .PGO (and .Meta to a lesser extent) files, can take as much as twice as long to open and save while this software holds them open for copying, syncing, or scanning. Conversely, having the "Computer Browser" service running helps with resolving other servers in the network and with domain group membership.
  • I'm assuming all clients are Ajax (i.e.: not Plugin or Desktop).

I strongly suggest you get familiar with the Scalability Tools (but always check for the most up-to-date version available), start measuring each individual app, or at least those that feel slowest, simulating different levels of load and concurrency, and extrapolate from there to see what you can expect if you do nothing, if you increase memory, or if the number of users grows.

Coming to your issue: with a 1 GB app used by several users, 48 GB of RAM can be just short. Using Qlik's own rule of thumb, if each of those 20 users consumes 10% of the app size per session, that's 1 GB initial + 20 × 100 MB = 3 GB of RAM before any click is performed. In my experience this 10% is rarely the case; most often it is a considerably lower value, like 1%, but it is still worth checking. The Scalability Tools can help with that.
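That rule of thumb is easy to sketch as a back-of-the-envelope calculation. A minimal illustration (the function name and the 10%/1% session factors are just the assumptions described above, not measured values):

```python
def estimate_ram_gb(app_size_gb, concurrent_users, session_factor=0.10):
    """Base app footprint plus a per-session share of the app size.

    session_factor: Qlik's rule-of-thumb 10% per session; in practice
    it is often closer to 1%, so measure before trusting either value.
    """
    return app_size_gb * (1 + concurrent_users * session_factor)

# The scenario from the question: a 1 GB app with 20 concurrent users.
print(estimate_ram_gb(1.0, 20))        # 10% per session -> 3.0 GB
print(estimate_ram_gb(1.0, 20, 0.01))  # ~1% per session -> 1.2 GB
```

Remember this covers a single app before any clicks; selections, other open QVWs, and the QDS all add on top of it.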

Quantity matters as much as speed. More but slower memory may harm more than it helps, and the same applies to CPUs: having more but slower CPUs might not help, depending on where your problem resides. It could also mean nothing if all your charts have a single field as dimension and one straight aggregation expression. Set analysis, when used properly, will help more than it will harm.

EDIT: Added service distribution and security/antivirus software. Added the 10% per user rule.

kdmarkee
Specialist
Author

Thank you very much for taking the time to respond, it is much appreciated.