I need help understanding the reason behind the error below, how to avoid it, and, if it cannot be avoided, what can be done to resume the task automatically.
The source and target details of my task are as follows:
Source: Microsoft SQL Server
Target: Snowflake on AWS
Error Code: 1020440 Error details: Could not read from backup. The backup file is either inaccessible or does not exist.
Dear @gkaur ,
If you found our response useful, we kindly ask you to click on “Accept as Solution”. This will aid other users in finding the answer more effortlessly.
Thank you! 😊
Regards,
Deepak
Hello @gkaur ,
From the error message:
Error Code: 1020440 Error details: Could not read from backup. The backup file is either inaccessible or does not exist.
It indicates that the backup file was deleted before Replicate could read it, or that it is not accessible to Qlik Replicate. To solve the problem, either ask your DBA to make the TLOG backup accessible to Replicate, or, if the backup logs are no longer available, reload the task. For the future, consider increasing the online log retention period.
By the way, you can set the SOURCE_CAPTURE logging component to Verbose and then check the Qlik Replicate task log file to find the SQL Server backup log file name.
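Once verbose logging is on, the task log can be scanned for the last LSN Replicate read before the failure. Below is a minimal, hedged sketch of such a scan; the regex is based on the SOURCE_READER log line quoted later in this thread, and may need adjusting for your Replicate version's exact log format.

```python
import re

# Extract the most recent "Last LSN scanned" value from Replicate task
# log text, so it can be matched against msdb backup history.
# The pattern assumes the hex LSN form aaaaaaaa:bbbbbbbb:cccc.
LSN_RE = re.compile(r"Last LSN scanned: ([0-9a-f]{8}:[0-9a-f]{8}:[0-9a-f]{4})")

def last_scanned_lsn(log_text):
    """Return the last LSN mentioned in the log text, or None if absent."""
    matches = LSN_RE.findall(log_text)
    return matches[-1] if matches else None

# Sample line taken verbatim from this thread:
sample = ("[SOURCE_READER ]I: Throughput monitor: "
          "Last DB time scanned: 2014-06-03T12:02:23.000. "
          "Last LSN scanned: 00195fd2:000bd477:01be. "
          "#scanned events: 4353858.")
```

In practice you would read the task's .log file from disk and pass its contents to last_scanned_lsn; the resulting LSN is what you then look up in msdb to find the backup file that contained it.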
Hope this helps.
John.
One can only 'resume' a task after a missing-backup error by restoring the backup and making it available again.
Does the error message not include the backup file name as well?
Does it retry? What (internal) parameters are in use? For example, is direct backup file access enabled, or is alternateBackupWaitForFile set?
If you don't have a filename, you can 'zoom in' using task log file lines like:
[SOURCE_READER ]I: Throughput monitor: Last DB time scanned: 2014-06-03T12:02:23.000. Last LSN scanned: 00195fd2:000bd477:01be. #scanned events: 4353858.
Those LSNs can be mapped to files by querying the msdb backupmediafamily and backupset system tables. For example:
select
    bs.backup_start_date, bs.first_lsn,
    [dbo].[fn_NumericLsnToHexa](bs.first_lsn) as "first_lsn - hex",
    [dbo].[fn_NumericLsnToHexa](bs.last_lsn) as "last_lsn - hex",
    bs.backup_size / (1024 * 1024) as "MB",
    bmf.physical_device_name
from msdb.dbo.backupset bs
join msdb.dbo.backupmediafamily bmf
    on bmf.media_set_id = bs.media_set_id
where bs.database_name = db_name()
  and bs.type = 'L'
  and bs.backup_start_date > getdate() - 2
order by bs.backup_start_date
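The [dbo].[fn_NumericLsnToHexa] helper used above is a user-defined function, not a built-in, so it may not exist on your instance. As a hedged sketch, the same conversion can be done outside SQL: it assumes the standard layout of msdb's numeric(25,0) LSN values, where the decimal digits split 10/10/5 into VLF sequence number, block offset, and slot.

```python
def numeric_lsn_to_hex(numeric_lsn):
    """Convert a numeric(25,0) LSN from msdb.dbo.backupset into the
    aaaaaaaa:bbbbbbbb:cccc hex form seen in Replicate log lines.
    Assumes the documented 10/10/5 digit split of the numeric LSN."""
    s = str(int(numeric_lsn)).rjust(25, "0")
    vlf, block, slot = int(s[:10]), int(s[10:20]), int(s[20:25])
    return f"{vlf:08x}:{block:08x}:{slot:04x}"
```

For example, numeric_lsn_to_hex(1662930000077528700446) yields "00195fd2:000bd477:01be", which matches the "Last LSN scanned" value in the log line quoted above.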
Good Luck!
Thanks, John, for your quick response.
Response from my DBA -
"Backups are configured in third party tool Rubrik and stored to cloud and backups gets deleted when retention period expires. Also, LSN is changed post every log backup or checkpoint and if Qlik is reading data from online logs and in between Tlog backup happens then LSN will be changed and Qlik will get errors as LSN chain breaks."
Now, based on this response what are our options?
We cannot simply reload tables as it will affect dependent applications as reload first truncates the existing data.
Hello @gkaur ,
Thanks for the update.
Qlik Replicate is a log-based product. In this scenario, Replicate captures changes from the TLOG and applies them to the target database, Snowflake on AWS. If a TLOG file is skipped or lost, some source database changes are missed, which leaves the data out of synchronization between the source and target sides.
In Qlik Replicate there are two Change Processing modes:
If the task is set to Apply Changes, the missing changes may cause Replicate to report warnings and/or errors, e.g. "No record found for applying a DELETE", "Duplicate key when applying an INSERT", "No record found for applying an UPDATE", etc., unless the default Error Handling policy is changed. However, even if the errors/warnings are suppressed, the data is still not synchronized.
If the task is set to Store Changes, no warnings/errors pop up within Qlik Replicate; however, the downstream applications (which consume the change data) may report exceptions because of the missing data.
In summary, you should either:
1- Restore the missing TLOG files to MS SQL Server native format so they are accessible to Qlik Replicate; or
2- Reload the task
Regarding your concern:
We cannot simply reload tables as it will affect dependent applications as reload first truncates the existing data.
The setting can be changed to NOT truncate the existing data; see "Target table preparation" and choose the option that fits your needs.
Hope this helps.
John.
Hi @gkaur ,
Here are a few potential solutions for the query you have posted.
1 Memory issues: If your server is nearly maxing out its RAM, a spike in the volume of changes in the task could cause the task to fail.
2 Change Processing mode: If Replicate has access to the backup logs, use the "Prioritize Online logs" option in Change Processing mode. This way, if the LSN is not found in the .ldf file, Replicate tries to read it from the .trn backup file.
3 Contact Support: If the problem persists, it might be best to open a support ticket with Qlik. When you do, set SOURCE_CAPTURE to Verbose, reproduce the behavior, and attach the Diagnostics Packages to the ticket.
Please note that these are general suggestions and the exact solution might vary depending on your specific setup and environment. Always ensure to follow your organization’s guidelines when making changes to your system.
Good luck!
Thanks,
Deepak
Hello team,
If our response has been helpful, please consider clicking "Accept as Solution". This will assist other users in easily finding the answer.
Regards,
Sushil Kumar