Qlik Community

New to Qlik Sense

Discussion board where members can get started with Qlik Sense.

Not applicable

Qlik Sense Architecture - Shared Persistence

Hello all,

I am quite new to Qlik Sense. I would like some advice regarding the way I should set up a Multi-Node environment.

Here is what we have:

  • 4 VMs with Windows Server 2012. All VMs have 2 CPU cores, 2.2GHz
    • VM1 has 8 GB of RAM
    • VM 2, 3 & 4 have 16 GB of RAM

I have installed Qlik Sense 3.2.2 Shared Persistence on the 4 machines as follows:

  • VM1 is the Central Node + Scheduler (reload)
  • VM2 is Proxy only
  • VM 3 & VM 4 are Engines + Scheduler (reload)
  • Entry point is VM2, which load balances on VM3 & VM4
  • The Scheduler service runs on VM1, VM3 & VM4

We are expecting a couple of hundred users to access the application, probably fewer.

I would like to hear your opinion on the following:

  • The architecture overall (scheduler, proxy, etc)
  • Hardware
    • Is 8 GB of memory too little for the Central Node, especially since it also performs reloads? Should I request an upgrade to 16 GB, like the other nodes?
    • Are 2 CPU cores too few? How CPU-intensive is Qlik Sense? Is it the same for all 4 servers?

Looking forward to reading your opinion!

Kind regards,

Mihai

1 Solution

Accepted Solutions
Luminary

Re: Qlik Sense Architecture - Shared Persistence

I'd suggest you keep the scheduler on one node (central maybe) and push the users to the other two nodes.

As your proxy node is overspec'd, you may use it as a secondary scheduler node too.

10 Replies
Luminary

Re: Qlik Sense Architecture - Shared Persistence

I would swap the Central and Proxy nodes based on your configuration. There's no need for 16 GB on a proxy node, while the central node might need it, especially as it's going to perform reloads.

What app sizes do you expect to have, and what reload frequency? To me, 2 CPU cores seem far too few for 100+ users.

Not applicable

Re: Qlik Sense Architecture - Shared Persistence

Thanks Martin. I can ask for the memory of the two servers to be swapped. I will also try to get the CPUs upgraded to 4 cores on all 4 servers; do you think that's enough? Which of the services is most CPU-intensive?

What do you think about the overall architecture? Would you do it differently?

Also, what do you think about the Scheduler? Is it OK on Central Node + the 2 Engine nodes?

Honestly I have no idea on the App size. In the POC installation that they used so far, the largest app size was quite small - 140 MB excluding the Operations Monitor app.

Kind regards,

Mihai

Luminary

Re: Qlik Sense Architecture - Shared Persistence

How often do you plan to reload? Do you do any extensive data modelling loads in Qlik Sense, or mostly straight `LOAD *`-style loads from sources?

Not applicable

Re: Qlik Sense Architecture - Shared Persistence

I would say mostly loads from sources, probably not that complex...
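For context, the distinction being discussed can be sketched in Qlik load script. The connection, table, and field names below are hypothetical placeholders:

```qlik
// A straight pull from a source: mostly I/O-bound during reload.
LIB CONNECT TO 'MyDB';          // hypothetical data connection

Orders:
LOAD *;                         // take every source field as-is
SQL SELECT * FROM dbo.Orders;

// A transformation-heavy load keeps the reload engine busier:
// deriving fields and reshaping data consume CPU and RAM on the
// node running the Scheduler service.
OrdersModel:
LOAD
    OrderID,
    CustomerID,
    Year(OrderDate)      AS OrderYear,
    Quantity * UnitPrice AS LineAmount
RESIDENT Orders;

DROP TABLE Orders;              // keep only the transformed table
```

Simple `LOAD *` pulls like the first block put little pressure on CPU, which is why reload complexity matters when sizing the scheduler nodes.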

Luminary

Re: Qlik Sense Architecture - Shared Persistence

Reload frequency? Hourly, daily, monthly?

Not applicable

Re: Qlik Sense Architecture - Shared Persistence

Reload frequency is going to be daily.

FYI, I asked for the 4 VMs to be upgraded to 16GB RAM & 4 Logical Processors. It's been authorized.

Not applicable

Re: Qlik Sense Architecture - Shared Persistence

So I now have 4 VMs with 16GB RAM each and 4 Cores. How can I change the architecture to benefit from them?

Luminary

Re: Qlik Sense Architecture - Shared Persistence

I'd suggest you keep the scheduler on one node (central maybe) and push the users to the other two nodes.

As your proxy node is overspec'd, you may use it as a secondary scheduler node too.

Not applicable

Re: Qlik Sense Architecture - Shared Persistence

So in the end the architecture will look like below, right?

VM1 - Central Node (Repository) + Scheduler

VM2 - Proxy + Scheduler

VM3 - Engine

VM4 - Engine

Thanks again!

Mihai Hutanu