MdFazil
Partner - Contributor III

How to avoid data loss, and what to do if it happens

Hi guys, a newbie here. I'm just practicing replication and related tasks. My question is: what are the ways to prevent data loss in Qlik Replicate? And in case data loss does happen, what are the options to resolve it? Can anyone share any documentation on this? Thanks in advance.


Regards
Fazil M

Labels (3)
1 Solution

Accepted Solutions
Heinvandenheuvel
Specialist III

Don't worry, be happy.

There are no known reasons for data loss caused by Replicate itself. If there ever were, those would have been bugs, and presumably addressed.

But let's say your Oracle source task is stopped for a day and starts back up reading from the archived logs, but the oldest ones have already been backed up elsewhere and deleted.

The task will refuse to start until those logs are put back. Alternatively, you could start from a timestamp corresponding to the archived logs that are still there, with 'tables already loaded', and you would have knowingly created a data loss. Sometimes, but rarely, that is the best way to get something onto the target. You could then, at a more convenient time, decide to reload to correct the loss. Often it is best to reload right away, because applying a day's worth of changes may be more work and take longer than just cleanly reloading.
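To pick a safe restart timestamp, it helps to see which archived logs are still on disk. A minimal sketch of such a check, assuming an Oracle source and a user with access to the `V$ARCHIVED_LOG` view (a standard Oracle view; nothing here is Replicate-specific):

```sql
-- List archived logs that have not yet been deleted, to find the
-- earliest timestamp Replicate could still resume from.
SELECT sequence#, first_time, next_time
FROM   v$archived_log
WHERE  deleted = 'NO'
ORDER  BY first_time;
```

The `FIRST_TIME` of the oldest surviving log is roughly the earliest point you could restart from without the task refusing to start.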

In some specific data loss cases you can fix things up by applying a benign update to the source, perhaps for a certain PK range or change date. Just set a column to itself and let that replicate through. You would need to change the task's error handling to 'insert when not found' so that rows missing on the target are handled as inserts.
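A hypothetical sketch of such a benign update; the table, column, and key range below are placeholders, not anything from a real schema:

```sql
-- "Touch" rows in the suspect PK range by setting a column to itself.
-- The update is a no-op for the source data, but it is still written to
-- the log, so Replicate picks it up and re-applies it on the target.
UPDATE orders
SET    status = status
WHERE  order_id BETWEEN 1000 AND 2000;
COMMIT;
```

With the task's apply-conflict handling set to insert missing records, any touched rows that were lost on the target get re-inserted rather than raising an error.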

Best not to worry about things you cannot predict, other than keeping a documented general reload playbook.

Hein.


7 Replies

Kent_Feng
Support

Hi @MdFazil 

Data loss is a very general question and can have many causes. We don't have a specific 'data loss' chapter in the user's guide, nor do we have technical documentation specific to this topic (at least at the moment).

To avoid data loss, the first step is to make sure you follow the user's guide and install the correct ODBC driver or client software for your source and target endpoints. For many endpoints a specific ODBC version is required; you can't use an older or a newer one, and the requirement may change when Replicate is updated. For example, for the MS SQL Server endpoint, ODBC 17.6 is required for Replicate 2022.05, while ODBC 18.1 is required for Replicate 2022.11 through Replicate 2023.11.

Secondly, you need to assign the required permissions to the user specified in the endpoints. Insufficient permissions may stop Replicate from reading source tables, which can lead to data loss.
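As an illustration only, the kind of grants an Oracle source user typically needs looks like this. This is not the authoritative list; the exact required permissions are in the Replicate user's guide for your endpoint and version, and the user name below is a placeholder:

```sql
-- Example grants for a hypothetical Replicate source user "repl_user".
GRANT CREATE SESSION            TO repl_user;
GRANT SELECT ANY TABLE          TO repl_user;  -- or per-table SELECT grants
GRANT SELECT ON v_$log          TO repl_user;  -- redo log metadata
GRANT SELECT ON v_$archived_log TO repl_user;  -- archived log metadata
```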

Thirdly, make sure you have a healthy and stable connection to both the source and target endpoints, and make sure the bandwidth and computing power are sufficient to handle the volume of data changes.

Hope the above will give you some hints.

Regards

Kent

*** Greetings from Down Under ***
MdFazil
Partner - Contributor III
Author

Hello @Heinvandenheuvel , thank you for replying.

Thanks for clarifying things here. This question was asked out of curiosity, and I appreciate the clarification.

Regards
Fazil M


MdFazil
Partner - Contributor III
Author

Hi @Kent_Feng , thank you for replying.


This is another angle on how the problem can occur, and thanks for letting me know about it.

Regards
Fazil M

john_wang
Support

Thanks for the feedback, glad to hear that! @MdFazil 

Dana_Baldwin
Support

Hi @MdFazil We have published the general troubleshooting steps for identifying the root cause of data issues here: Troubleshooting Missing Data and Collecting inform... - Qlik Community - 1713268

MdFazil
Partner - Contributor III
Author

Hello @Dana_Baldwin ,

Thanks for sharing this; the document will be very helpful for me.

Regards
Fazil M