NewbieQlik
Contributor III

Compose for Data Warehouse HA/DR Recovery

We are using Compose for Data Warehouse to build various DIMs and FACTs for analytical requirements. In production, Compose is installed on an EC2 instance in the east1 region. In a DR scenario, the procedure we normally follow is: provision a new EC2 instance in east2, deploy the code, create the DW and DM, and generate the ETL instructions pointing to a new schema. Once that is generated, we update the connections back to the original DW and DM schemas and regenerate the ETL instructions to reflect the connection change. Since we use Snowflake for building our DIMs and FACTs, all of these metadata operations are relatively slow, and the whole process of rebuilding and regenerating the ETL instructions normally takes around 2 to 3 hours.
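
To illustrate the provisioning step, a minimal boto3 sketch of standing up the DR instance might look like the following. The AMI ID, instance type, key pair, and subnet are placeholders for illustration, not our actual values.

import boto3

# Placeholder values for illustration only; substitute your own AMI,
# instance type, key pair, and subnet in the DR region.
DR_REGION = "us-east-2"
COMPOSE_AMI = "ami-0123456789abcdef0"   # assumed pre-baked image with Compose installed
INSTANCE_TYPE = "t3.xlarge"
KEY_NAME = "compose-dr-key"
SUBNET_ID = "subnet-0123456789abcdef0"

def provision_dr_instance():
    """Launch a single EC2 instance in the DR region to host Compose."""
    ec2 = boto3.client("ec2", region_name=DR_REGION)
    response = ec2.run_instances(
        ImageId=COMPOSE_AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        SubnetId=SUBNET_ID,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "compose-dw-dr"}],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Block until the instance is running before deploying the Compose projects.
    waiter = ec2.get_waiter("instance_running")
    waiter.wait(InstanceIds=[instance_id])
    return instance_id

if __name__ == "__main__":
    print("Launched DR instance:", provision_dr_instance())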

Is there an easier way to handle this scenario when it comes to an HA or DR situation? Is there an option to copy the data folder from the east1 server to the east2 server (the new EC2 instance for DR) so that we can avoid rebuilding and regenerating the ETL instructions? That way we could considerably reduce the recovery time.
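
What we have in mind is something along these lines: a periodic copy of the Compose data folder to an S3 bucket that the DR instance can restore from. We are not sure whether restoring Compose from such a copy is supported, which is exactly what we are asking; mechanically, though, the copy itself would be a simple sketch like the one below. The data folder path and bucket name are assumptions for illustration, not our actual configuration.

import os
import boto3

# Assumed values for illustration; adjust to your actual Compose data
# directory and a bucket reachable from both regions.
COMPOSE_DATA_DIR = r"C:\Program Files\Qlik\Compose\data"
BACKUP_BUCKET = "compose-dr-backup"
BACKUP_PREFIX = "compose-data/"

def upload_data_folder():
    """Copy every file under the Compose data folder to S3, preserving relative paths."""
    s3 = boto3.client("s3")
    for root, _dirs, files in os.walk(COMPOSE_DATA_DIR):
        for name in files:
            local_path = os.path.join(root, name)
            rel_path = os.path.relpath(local_path, COMPOSE_DATA_DIR)
            key = BACKUP_PREFIX + rel_path.replace(os.sep, "/")
            s3.upload_file(local_path, BACKUP_BUCKET, key)

def restore_data_folder(target_dir):
    """On the DR instance, pull the backed-up data folder down from S3."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BACKUP_BUCKET, Prefix=BACKUP_PREFIX):
        for obj in page.get("Contents", []):
            rel_path = obj["Key"][len(BACKUP_PREFIX):]
            local_path = os.path.join(target_dir, rel_path.replace("/", os.sep))
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3.download_file(BACKUP_BUCKET, obj["Key"], local_path)

if __name__ == "__main__":
    upload_data_folder()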

Any help on this topic is greatly appreciated.

1 Reply
sureshkumar
Support

Hello @NewbieQlik 

Kindly refer to the user guide link below:

Setting up Compose on a Windows HA cluster | Qlik Compose Help

 

Regards,
Suresh