kausar007
Contributor II

Is the central node in a multi-node deployment a single point of failure?

Hi Guys,

I am planning a highly available Qlik Sense server deployment. Reading through the documentation, I found that a central node is required for all kinds of deployments and that "Rim nodes synchronize data through the central node." How is the central node then not a single point of failure? What happens if the central node goes down?

Kind Regards,

Kausar

jaisoni_trp
Creator II

You can make another node a failover candidate so that it can take over as the central node in shared persistence. I would suggest moving to the shared persistence architecture instead of synchronized persistence.
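For reference, a quick way to see which node is currently central and which nodes are marked as failover candidates is to query the repository service. The snippet below is a minimal sketch only: the hostnames and certificate paths are placeholders, it assumes certificate authentication on QRS port 4242, and the failoverCandidate flag on ServerNodeConfiguration only exists in the newer releases that actually support failover.

```python
# Minimal sketch: list server nodes via the Qlik Repository Service (QRS) API
# and flag the central node and any failover candidates.
# Assumptions: certificate auth on port 4242, placeholder hostnames/paths,
# and a Qlik Sense version that exposes the "failoverCandidate" field.
import requests

XRFKEY = "0123456789abcdef"                 # any 16-character value, echoed in the header
BASE = "https://central.example.com:4242"   # placeholder central node hostname

headers = {
    "X-Qlik-Xrfkey": XRFKEY,
    "X-Qlik-User": "UserDirectory=INTERNAL; UserId=sa_repository",  # internal admin account
}

resp = requests.get(
    f"{BASE}/qrs/servernodeconfiguration/full",
    params={"xrfkey": XRFKEY},
    headers=headers,
    cert=("client.pem", "client_key.pem"),  # client certificate exported from the central node
    verify="root.pem",                      # Qlik Sense root certificate
)
resp.raise_for_status()

for node in resp.json():
    role = "central" if node.get("isCentral") else "rim"
    failover = " (failover candidate)" if node.get("failoverCandidate") else ""
    print(f"{node['hostName']}: {role}{failover}")
```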

kausar007
Contributor II
Author

Hi Jai,

Thanks for the suggestions. Do you have any tutorial or guide links on how to install the Qlik Sense components in a shared persistence architecture with high availability?

Regards,

Kausar

jaisoni_trp
Creator II

Qlik does a good job of documenting everything on the help site. I would start with that.

http://help.qlik.com/en-US/sense/September2017/Subsystems/PlanningQlikSenseDeployments/Content/Deplo...

kausar007
Contributor II
Author

Hi Jai,

Yes, I was reading through this documentation, so it sounds like I am on the right track. I will be using the shared persistence option and not synchronized persistence. As an example, let's say I am creating something like this:

[Diagram: dr_PortsSepProxyEngineHA_613x535.png (separate proxy/engine nodes and ports for a high-availability setup)]

I have a central node and two proxy/engine nodes. But my question still stands: what happens if the central node goes down?
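(As a rough illustration only: a probe like the sketch below, with placeholder hostnames, shows which nodes still respond when the central node is taken offline. In a real deployment the engine health-check call normally goes through an authenticated virtual proxy, and certificates should be verified.)

```python
# Minimal availability probe for a multi-node site: poll each node and report
# whether it responds. Hostnames are placeholders; the health-check endpoint
# usually sits behind an authenticated virtual proxy in a hardened setup.
import requests

NODES = {
    "central":        "https://central.example.com/engine/healthcheck/",
    "proxy-engine-1": "https://rim1.example.com/engine/healthcheck/",
    "proxy-engine-2": "https://rim2.example.com/engine/healthcheck/",
}

for name, url in NODES.items():
    try:
        # verify=False is for illustration only; verify certificates in production
        resp = requests.get(url, timeout=5, verify=False)
        status = "up" if resp.ok else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"down ({exc.__class__.__name__})"
    print(f"{name}: {status}")
```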

Thanks,

Kausar

kausar007
Contributor II
Author

Hi Jai,

This is great, thank you for the link. I was looking at the version 3.2 documentation, as I had 3.2 installed. It looks like failover was introduced later, so I will be looking to upgrade now.


Regards,

Kausar