Hi,
I have a standalone central node server with 512 GB of RAM running Qlik, and it sits at 95-98% utilisation most of the time. Data volumes have grown significantly, and most of the consumption seems to come from extraction and transformation jobs. I was thinking of distributing those jobs to a development server with 128 GB of RAM, which I haven't been using so far because of the large data and RAM requirements.
I would like to know how to proceed: how can I schedule the transformations and the model load on my main server to run one after another without gaps, given that I can't predict how long each reload will take (the time will only increase as the data grows), or whether there is any other problem with this approach.
I can save and access files on either server. I am working with Qlik Sense Enterprise on-premises.
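Would chaining the tasks with task-event triggers in the QMC be the way to go, or would something like the rough script below against the Repository Service API (QRS) make sense? In this sketch the host name, certificate paths, task names, endpoint, and the numeric status codes are placeholders/assumptions on my side, not verified:

```python
# Rough sketch: run Qlik Sense reload tasks one after another via the QRS API.
# Assumptions (check against your version's docs): certificates exported from the QMC,
# QRS reachable on port 4242, and status 7/8 meaning finished-success/finished-fail.
import time
import requests

QRS = "https://qlikserver.mydomain.local:4242"      # placeholder central node host
CERT = ("client.pem", "client_key.pem")             # exported Qlik certificates
XRF = "abcdefghijklmnop"                             # any 16-char value, sent twice
HEADERS = {
    "X-Qlik-Xrfkey": XRF,
    "X-Qlik-User": "UserDirectory=internal; UserId=sa_repository",
}

def qrs(method, path, **kwargs):
    """Small helper for QRS calls: appends the xrfkey and attaches certificates."""
    url = f"{QRS}{path}{'&' if '?' in path else '?'}xrfkey={XRF}"
    resp = requests.request(method, url, headers=HEADERS, cert=CERT,
                            verify=False, **kwargs)   # self-signed certs; verify properly in production
    resp.raise_for_status()
    return resp.json() if resp.text else None

def wait_for_task(name, poll_seconds=60):
    """Poll the reload task until its last execution result reports success."""
    while True:
        tasks = qrs("GET", f"/qrs/reloadtask/full?filter=name eq '{name}'")
        status = tasks[0]["operational"]["lastExecutionResult"]["status"]
        if status == 7:          # 7 = FinishedSuccess (assumption)
            return
        if status == 8:          # 8 = FinishedFail (assumption)
            raise RuntimeError(f"Task '{name}' failed")
        time.sleep(poll_seconds)

# Placeholder task names for the extract -> transform -> model chain.
# NB: a robust script would also confirm a new execution actually started
# (e.g. compare start times) before polling; omitted here for brevity.
chain = ["Extract - Sales", "Transform - Sales", "Model - Sales"]
for task_name in chain:
    qrs("POST", f"/qrs/task/start?name={task_name}")
    time.sleep(30)
    wait_for_task(task_name)
```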
I would suggest creating a multi-node architecture: add a rim node and apply a load balancing rule to it.
Regards,
Prashant Sangle
If the issue is specifically RAM, keep in mind Qlik Sense is meant to use it up. You can control the amount taken up (min and max) from the QMC.
As a best practice, my preference is to have a separate server (node) for reloads. That's not so much about RAM, since the same total amount gets used (actually a bit more, owing to overhead), but about containing the CPU fluctuations that can happen during large or complex load scripts.
Won't adding another server with less RAM to my central node create performance issues, where many users won't be able to open any app or sheet?
I have set it to 90-98, but that still isn't sufficient.
If you have it set at 90 and 98, Qlik will always take up at least 90% of your available RAM. That doesn't mean it's actively using it, but it will show as used in Task Manager and similar tools. A better idea would be to lower the minimum and see how much is actually being used.
Having a reload-only node (with less RAM) means you're shunting the resources needed for reloads to another server, leaving the central server strictly as a repository and user-facing server. That means your users will not be impacted by reloads taking up a lot of RAM, CPU, or I/O.
Besides considering distributing the environment across multiple nodes and/or adding more resources, you should also monitor the current consumption: which ETL tasks and which apps consume how much, and when it happens. Quite often, optimizations are simpler and more effective than bigger changes to the environment.
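For a quick point-in-time view of what the engine is actually holding, the engine health-check endpoint can help; a minimal sketch, assuming direct certificate access to port 4747 on the node (host, certificate paths, and the exact response field names are assumptions to verify against your version):

```python
# Minimal sketch: read the engine health-check to see actual engine memory use,
# rather than what Task Manager shows as committed.
# Assumptions: engine reachable on port 4747 with exported Qlik certificates,
# and response fields named mem.committed / mem.allocated / mem.free (MB),
# cpu.total, apps.in_memory_docs.
import requests

HOST = "https://qlikserver.mydomain.local:4747"   # placeholder node name
CERT = ("client.pem", "client_key.pem")           # certificates exported from the QMC

resp = requests.get(
    f"{HOST}/engine/healthcheck/",
    headers={"X-Qlik-User": "UserDirectory=internal; UserId=sa_api"},
    cert=CERT,
    verify=False,   # Qlik's self-signed certs; verify properly in production
)
resp.raise_for_status()
health = resp.json()

mem, cpu, apps = health.get("mem", {}), health.get("cpu", {}), health.get("apps", {})
print("Committed MB  :", mem.get("committed"))
print("Allocated MB  :", mem.get("allocated"))
print("Free MB       :", mem.get("free"))
print("CPU total %   :", cpu.get("total"))
print("Apps in memory:", len(apps.get("in_memory_docs", [])))
```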