Determining maximum number of concurrent Publisher tasks
I have been struggling for a long time with how to decide the maximum number of simultaneous tasks to set in the QMC.
I have four servers, each with 48 cores and 1 TB of RAM; two of them are QlikView Servers and two are QlikView Publisher servers. Generally, we have 25 reload tasks running in the QMC, which includes 5-6 distribution tasks.
I also have doubts about two settings in the QMC: Max number of simultaneous QlikView engines for distribution and Max number of simultaneous QlikView engines for administration. Which of them actually serves my purpose of raising the task limit to the maximum safe value, without starving the OS of resources? Below is my environment architecture:
Currently, I have the below number of tasks set up in each:
I would greatly appreciate help with understanding these settings and configuring the QMC in the most correct and efficient way.
Re: Determining maximum number of concurrent Publisher tasks
Determining how many tasks you can run in parallel is a tricky question. There is no "one size fits all" answer, since this depends very much on your individual QlikView documents and data needs.
But I hope to be able to answer at least some of your questions.
The Max number of simultaneous QlikView Engines for Distribution setting governs the actual tasks, that is, launching a QVB.exe instance to carry out either a reload or a distribution. The Administration setting applies when you are modifying tasks in the QMC. See this article (kb 000039069) for details.
As for how many tasks you should allow to run, we have some guidelines on what to consider when planning this. Our "How many reloads to allow in your environment" article (kb 000026361) is a good starting point for research. It also covers bottlenecks on the Windows side, for example the Desktop Heap Size, which can prevent additional QVB.exe instances from being launched. You may also be interested in our queue system (kb 000031318), which helps prevent overloads.
Generally, it boils down to task complexity, data volume, and how well you can spread the tasks out over time depending on how resource-intensive they are individually.
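As a way to reason about the trade-off described above, here is a minimal back-of-the-envelope sketch for sizing the engine count from RAM alone. This is not an official Qlik formula; the function name, the safety factor, and all numbers are hypothetical placeholders, and RAM is only one dimension (CPU, I/O, and the Desktop Heap limits mentioned above can cap you lower). Measure the peak memory of your own reloads before relying on anything like this.

```python
# Rough sizing sketch: estimate how many simultaneous QVB.exe reload
# engines a Publisher node could host based on RAM alone.
# All values are hypothetical placeholders, not Qlik recommendations.

def max_simultaneous_engines(total_ram_gb: float,
                             os_reserve_gb: float,
                             avg_reload_peak_gb: float,
                             safety_factor: float = 0.8) -> int:
    """Return a conservative engine count from peak RAM per reload.

    safety_factor leaves headroom for reloads that spike above average.
    """
    usable = (total_ram_gb - os_reserve_gb) * safety_factor
    return max(1, int(usable // avg_reload_peak_gb))

# Example: a 1 TB node, 64 GB reserved for the OS and services,
# with reloads peaking around 40 GB each.
print(max_simultaneous_engines(1024, 64, 40))  # -> 19
```

In practice you would start well below such an estimate, watch actual memory and CPU during the busiest reload window, and raise the QMC setting gradually.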
I wish I could be more specific, but there are too many variables involved to give an exact recommendation.
Sonja Bauernfeind Senior Technical Support Engineer and Knowledge Centered Support Gremlin
To help users find verified answers, please don't forget to mark the resolution or answer to your problem or question as correct.