
Amazon RDS for SQL Server as an Endpoint - LSN errors
Does anybody else use Amazon RDS for SQL Server as an endpoint? We have a requirement where replicating from the transaction log is preferred over ms-cdc tables as a source, for performance reasons (DDL changes also get deployed to the target more cleanly with tlog as the source than with ms-cdc). The tasks work fine, but they periodically (at least once a week) fail with errors like the example below, and we then have to reload the target. Are there any best practices or tips for using RDS for SQL Server as an endpoint?
Stream component 'st_0_xxx' terminated
Stream component failed at subtask 0, component st_0_xxx
Error executing source loop
2023/11/30 05:54:46 UTC INFO: LSN '000a235b:000db582:0025' could not be found, neither in the transaction log nor in the backup log.

Hi @jm32
The LSN that could not be found, either in the transaction log or in the backup logs, is listed in the error message:
'000a235b:000db582:0025'
To avoid a full reload, you can restore the log backup that contains that LSN and then resume the task. To prevent the missing-LSN error from recurring, we recommend keeping the log backups for a longer retention period.
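If it helps, here is a minimal sketch of locating the log backup that covers a given LSN. It assumes native log backups recorded in msdb (RDS-managed backups may not appear there), and that the hex LSN from the error message is first converted to the decimal form that msdb stores:

-- Sketch: find the log backup whose LSN range covers the missing LSN.
-- msdb stores LSNs as decimal numerics; convert each hex part and pad:
--   000a235b hex = 664411, 000db582 hex = 898434 (padded to 10 digits),
--   0025 hex = 37 (padded to 5 digits)
declare @lsn numeric(25, 0) = 664411000089843400037;
select s.database_name, s.backup_start_date, s.first_lsn, s.last_lsn,
       m.physical_device_name
from msdb.dbo.backupset as s
join msdb.dbo.backupmediafamily as m on s.media_set_id = m.media_set_id
where s.type = 'L'  -- log backups only
  and @lsn between s.first_lsn and s.last_lsn;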

Does RDS for SQL Server connect to the log backups, or only to the active transaction log?

Hi @jm32
You are correct, Replicate cannot access the backup TLOG. From the list of limitations in the User Guide:
- Amazon RDS for SQL Server moves the online logs to the backup location when the maximum allocated storage size is reached. This may sometimes occur before Replicate has read the logs, resulting in missing changes on the target. Therefore, in a system with a high volume of changes (or if you notice that some changes are not being captured), best practice is to increase the allocated storage size as described in How can I troubleshoot storage consumption in my Amazon RDS DB instance that is running SQL Server?
Here's the full section of the User Guide (you can change which version of the software you're viewing the documentation for at the top left of the screen): Limitations and considerations | Qlik Replicate Help
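Not from the documentation, but a quick way to watch how close the active log is to its ceiling before RDS moves it out; a minimal sketch using standard DMVs:

-- Sketch: active log usage for the current database (SQL Server 2012+).
select total_log_size_in_bytes / 1048576.0 as total_log_mb,
       used_log_space_in_bytes / 1048576.0 as used_log_mb,
       used_log_space_in_percent
from sys.dm_db_log_space_usage;

-- Or, one line per database:
DBCC SQLPERF(LOGSPACE);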

Thank you for your response! I ran the query below, and the results listed all of the log files; the first one is the TLog that Replicate is using. So unless I'm misunderstanding, it doesn't look like a storage issue where the online logs are moved to the backup location...
-- size and max_size are reported in 8 KB pages, so * 0.0078125 (8/1024) converts to MB
-- (a max_size of -1 means the file can grow until the disk is full)
SELECT D.name AS [database_name]
, F.name AS [file_name]
, F.type_desc AS [file_type]
, CONVERT(decimal(10,2), F.size * 0.0078125) AS [size_on_disk_mb]
, CONVERT(decimal(10,2), F.max_size * 0.0078125) AS [max_size_mb]
FROM sys.master_files AS F
INNER JOIN sys.databases AS D
ON F.database_id = D.database_id;
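One related check that is not in the thread: asking SQL Server what, if anything, is currently holding log records in place; a minimal sketch:

-- Sketch: per database, what is preventing log truncation right now
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, NOTHING).
select name, log_reuse_wait_desc
from sys.databases;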

Hi @jm32
Try this query using your LSN to see if it's still in the online log:
select top 10 * from ::fn_dblog('xxxxxxx:xxxxxxxx:0001', null)
Note: Please replace xxxxxxx:xxxxxxxx with a valid LSN
Borrowed from this knowledge article:
Qlik Replicate: SQL Server Active/Backup presence ... - Qlik Community - 1902692
- sys.fn_dblog reads from the active portion of the transaction log.
- sys.fn_dump_dblog reads from the transaction log backups, so you can query this function to confirm that the LSN is in the backup.
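If the LSN-parameter format gives you trouble, a variant that side-steps it is to scan the whole active log and filter on the Current LSN column instead (a sketch, using the LSN from your error message and assuming you have permission to run fn_dblog on the instance):

-- Sketch: scan the entire active log (null, null = no LSN bounds) and
-- filter on the hex LSN exactly as Replicate reports it.
select [Current LSN], Operation, [Transaction ID]
from fn_dblog(null, null)
where [Current LSN] = '000a235b:000db582:0025';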

Hi @jm32
I'm having difficulty identifying the exact query to run against fn_dump_dblog, specifically where to pass the LSN. This link might help:
Read SQL Server Transaction Log Backups to Find When Transactions Occurred (mssqltips.com)
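In case it helps, here is a minimal sketch of a fn_dump_dblog call. The function is undocumented: it takes a start and end LSN, a device type, a media sequence number, and then 64 backup file slots, and every unused slot must still be passed as DEFAULT. The file path below is hypothetical, and since RDS does not expose local file paths, this is mainly useful against a self-managed copy of the backup:

-- Sketch: read a transaction log backup directly and check for a specific LSN.
-- null, null = no LSN bounds; filter on [Current LSN] afterwards.
select [Current LSN], Operation, [Transaction ID]
from fn_dump_dblog(
    null, null, N'DISK', 1,
    N'D:\backups\mydb_log.trn',  -- hypothetical backup file path
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default,
    default, default, default, default, default, default, default)
where [Current LSN] = '000a235b:000db582:0025';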

Hello team,
If our response has been helpful, please consider clicking "Accept as Solution". This will assist other users in easily finding the answer.
Regards,
Sushil Kumar

Is there any advice on how to configure the task and the source to avoid these errors?

Hi @jm32
To my knowledge, the only way to ensure that an LSN is not moved from the online log before the task reads it is to increase the maximum allocated storage size for the TLOG. I'm not sure how full the log has to be before data is moved to the backup log, but you need enough headroom to leave a margin. Unfortunately, I don't know how to estimate the increase, since it depends on the volume of activity on the source: you'll have to add space, monitor for this condition, and add more if it isn't enough.
Thanks,
Dana
