Dear Community,
We are working on a way of achieving zero-downtime backups.
As stated in this article:
As described in the Qlik Sense Help topic on backing up and restoring a site, the key requirement for a successful backup is to stop all services. This means the environment is forced into a short period of downtime while the backup is executed.
It is theoretically possible to take a backup of the repository (PostgreSQL) database without stopping the service, but it is not possible to ensure that app files and static content are in the same state as the database unless the Qlik Sense services are stopped prior to backup.
All Qlik Sense services must be stopped to ensure that the database, app files, and static content are in a consistent state in the backup. Importantly, in a multi-node environment, all services on all nodes must be stopped prior to backup.
The only way to accomplish this is to stop all Qlik Sense related services before the backup.
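To make the stop-everything requirement concrete, here is a minimal sketch of the weekly sequence: stop all services, run the backup, then restart in reverse order even if the backup fails. The service names below are an assumption and should be verified against your own deployment; the backup step itself is left as a callable.

```python
import subprocess

# Hypothetical list of Qlik Sense Windows service names (verify on your nodes);
# the repository service is stopped last and restarted first.
QLIK_SERVICES = [
    "QlikSenseProxyService",
    "QlikSenseEngineService",
    "QlikSenseSchedulerService",
    "QlikSensePrintingService",
    "QlikSenseServiceDispatcher",
    "QlikSenseRepositoryService",
]

def full_backup(run=subprocess.run, backup=lambda: None):
    """Stop every service, take the backup, then restart in reverse order."""
    stopped = []
    try:
        for svc in QLIK_SERVICES:
            run(["net", "stop", svc], check=True)
            stopped.append(svc)
        backup()  # e.g. pg_dump plus a copy of app files and static content
    finally:
        # Restart even if the backup step failed, repository first.
        for svc in reversed(stopped):
            run(["net", "start", svc], check=False)
```

In a multi-node site the same stop/start sequence would have to run on every node before and after the backup step.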
To minimize user impact further, we plan to perform this full-backup process once a week, for example over the weekend. This way we keep the window of service downtime short even with users accessing the site from different time zones.
Of course, between full backups there is a 7-day gap during which any kind of disaster can take place. The worst case is a failure a second before this week's full backup, in which we would lose almost 7 days of everything.
On a daily basis, we plan to save everything that can't be recovered simply by re-querying the production/staging database or any other data source. If there is a lot of new data, it will just take longer for the QVDs to catch up with the database; all we need is time and patience.
The daily backups will consist of the spreadsheets we use for configuration purposes, all of our Qlik applications (with no data), tasks, data connections, users, streams, etc.: anything that was created by hand and can't simply be rebuilt or reconstructed. We plan to cover the spreadsheets with a script that saves and uploads them to a repository, and the Qlik applications by automating Qlik-CLI, which exposes the API call that exports applications without data.
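For the data-less app export, a sketch of the underlying repository-service call might look like the following. This assumes the QRS export endpoint of recent Qlik Sense versions (`/qrs/app/{id}/export/{token}` with a `skipdata` query parameter) and uses placeholder host and xrfkey values; verify the endpoint shape against your version's QRS API documentation before relying on it.

```python
import uuid

def build_export_url(host, app_id, export_token=None, skip_data=True):
    """Return the QRS URL that starts a data-less export of one app.

    Assumes the default QRS port 4242 and the skipdata query parameter;
    both should be checked against your Qlik Sense version.
    """
    token = export_token or str(uuid.uuid4())
    xrfkey = "0123456789abcdef"  # any 16-char value, repeated in the X-Qlik-Xrfkey header
    return (
        f"https://{host}:4242/qrs/app/{app_id}/export/{token}"
        f"?skipdata={'true' if skip_data else 'false'}&xrfkey={xrfkey}"
    )
```

A daily job would iterate over the app IDs returned by a QRS app listing, call this URL for each, and download the resulting .qvf files to the backup repository.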
That leaves the tasks, data connections, users, streams, etc. These pieces can't be exported from the QMC or via an API call as easily as Qlik apps, so we plan to dump only those tables from the database that hold the information relevant to each component. For example, for each task we would like to save which app is triggered and which triggers are configured (for an event trigger, the task it is chained to; for a scheduled trigger, the time period).
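The per-table dump could be driven by something like the sketch below, which builds a pg_dump command restricted to selected tables. It assumes the bundled PostgreSQL repository database ("QSR" on port 4432, the usual defaults); the table names passed in are assumptions to be verified against the actual repository schema.

```python
def pg_dump_command(tables, dbname="QSR", host="localhost", port=4432,
                    user="postgres", outfile="qsr_partial.sql"):
    """Build a pg_dump argv that dumps only the given repository tables."""
    cmd = ["pg_dump", "-h", host, "-p", str(port), "-U", user,
           "-d", dbname, "-f", outfile]
    for t in tables:
        cmd += ["-t", f'"{t}"']  # quote to preserve mixed-case table names
    return cmd
```

The daily job would then run, for example, `subprocess.run(pg_dump_command(["ReloadTasks", "DataConnections", "Users", "Streams"]), check=True)`, where the table names are illustrative placeholders, not a verified list.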
An important note: for the time being, we assume the restore process for what is backed up daily will be manual.
Finally,
Does this approach make sense? Are we heading in the right direction?
Has anybody ever faced a challenge like having to guarantee zero-downtime backups?
Does anybody have another approach to achieving this?
Does anybody have a summary of which tables we should query to cover the information we need on a daily basis?
Any directions, advice, or suggestions will be more than welcome!
Thanks in advance!
Agu.-