raffaelec
Partner - Contributor II

High latency in Snowflake replication task despite zero latency in Log Stream task

Hello everyone,

I've configured a Log Stream task to read from an Oracle database, along with a replication task that reads from this Log Stream and replicates data to Snowflake.

Currently, the replication task targeting Snowflake is experiencing very high latency, while the Log Stream task shows zero latency.

Moreover:

  • There are no recent errors in the logs of the Snowflake task.
  • The Snowflake replication task shows no incoming changes, even though the Log Stream continues capturing them.
  • The last change applied by the Snowflake task occurred before the latency began to rise.
  • The Change Processing tab shows the source and target latency curves fully overlapping, which means the latency originates entirely at the source. Performance logs confirm this.
  • Connection tests to both source and target databases are successful. Test connections to the Log Stream endpoint are also successful.
  • The Qlik Replicate server has no space or memory issues.
  • This behavior occurred once before. At that time, stopping and resuming the Snowflake task didn't work: it remained in the “waiting for open transactions to be committed” state for several hours. I performed a full reload of the tasks to solve the problem.
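A minimal sketch of how the overlapping curves can be read (the latency definitions below are assumptions for illustration, not Replicate internals): if both source and target latency are measured from the source commit time, overlapping curves mean the apply side adds almost nothing.

```python
from datetime import datetime, timedelta

# Assumed definitions, for illustration only:
#   source latency = time Replicate read the change  - source commit time
#   target latency = time the change hit the target  - source commit time
# Overlapping curves => target latency ~ source latency, so the delay
# originates before the target apply stage.

def latencies(committed, read_by_replicate, applied_to_target):
    source = read_by_replicate - committed
    target = applied_to_target - committed
    return source, target

t0 = datetime(2024, 1, 1, 12, 0, 0)
src, tgt = latencies(t0, t0 + timedelta(minutes=30),
                     t0 + timedelta(minutes=30, seconds=2))
print(tgt - src)  # -> 0:00:02, i.e. the target side adds only ~2 seconds
```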

What could be causing this behavior? Are there recommended solutions for this kind of latency pattern?

Thank you in advance for your assistance.


3 Replies
DesmondWOO
Support

Hi @raffaelec ,

Did this issue occur after a maintenance job? Please enable TRACE logging on TARGET_APPLY to verify that the task is running and to check the performance. I suspect it may be related to a large transaction with a single commit. Please also check if many sorter files were generated.
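The "large transaction with a single commit" theory can be illustrated with a toy model (this sketches the general CDC principle that only committed transactions are applied; it is not Replicate's actual implementation): while a big transaction stays open, none of its changes reach the target, so the target shows no incoming changes and latency climbs.

```python
# Toy CDC applier: changes are buffered per transaction and only forwarded
# to the target on COMMIT. Names and structure are illustrative only.

class ToyApplier:
    def __init__(self):
        self.open_txns = {}  # txn_id -> buffered changes, not yet committed
        self.applied = []    # changes actually forwarded to the target

    def on_change(self, txn_id, change):
        self.open_txns.setdefault(txn_id, []).append(change)

    def on_commit(self, txn_id):
        self.applied.extend(self.open_txns.pop(txn_id, []))

applier = ToyApplier()
for i in range(100_000):
    applier.on_change("big_txn", f"row-{i}")   # huge txn, still open
applier.on_change("small_txn", "row-x")
applier.on_commit("small_txn")

print(len(applier.applied))  # -> 1: the 100,000 buffered changes are invisible
```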

Without investigating the task log, it is difficult to determine the cause. I recommend creating a support ticket.

Regards,
Desmond

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
raffaelec
Partner - Contributor II
Author

Hello @DesmondWOO,

Thank you for your answer.

I enabled TRACE logging on TARGET_APPLY. The log file only recorded several “task is running” messages afterward.

I also checked the sorter folder for the Snowflake task on the Qlik Replicate server, but it was empty.
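A quick way to run that sorter-folder check (the task data path below is an assumed default-install location, not a confirmed one; adjust for your environment):

```python
from pathlib import Path

def sorter_stats(task_data_dir):
    """Return (file count, total bytes) for a task's sorter folder.

    task_data_dir is assumed to look like
    <replicate_data_dir>/tasks/<TASK_NAME>; the exact layout may differ
    between installs.
    """
    sorter = Path(task_data_dir) / "sorter"
    files = [p for p in sorter.iterdir() if p.is_file()] if sorter.is_dir() else []
    return len(files), sum(p.stat().st_size for p in files)

# Hypothetical task path on a default Linux install:
count, total = sorter_stats("/opt/attunity/replicate/data/tasks/MY_SNOWFLAKE_TASK")
print(count, total)  # -> 0 0 when the folder is empty or missing
```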

Moreover, I tried stopping and resuming the Snowflake task, but it became stuck in the “waiting for open transactions to be committed” state. After 3 hours I attempted a reload of the Snowflake task, which got stuck in the same state. It seems the only way to resolve the issue is to reload the Log Stream task and then the Snowflake task.

Is there any internal parameter or task configuration that can be used to improve the handling of large transactions within the Log Stream and Snowflake tasks during CDC?

Thank you in advance.

Best regards

DesmondWOO
Support
Accepted Solution

Hi @raffaelec ,

Regarding performance, I believe that the 'loading method,' 'max file size (MB),' and 'number of files to load per batch' are relevant factors.
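As a back-of-the-envelope view of how those two settings interact (the arithmetic below is illustrative, not Replicate's actual batching logic): a change batch staged for Snowflake is split into files capped at 'max file size (MB)', which are then loaded 'number of files to load per batch' at a time.

```python
import math

def load_cycles(batch_mb, max_file_mb, files_per_batch):
    """Rough estimate of staged files and load round-trips for one batch.

    Illustrative only - Replicate's real batching is more involved.
    """
    n_files = math.ceil(batch_mb / max_file_mb)
    return n_files, math.ceil(n_files / files_per_batch)

# 2 GB of changes, 100 MB files, 10 files loaded per batch:
print(load_cycles(2048, 100, 10))  # -> (21, 3): 21 files, 3 load round-trips
```

Larger files and more files per batch mean fewer load round-trips, at the cost of bigger, less frequent applies.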

Generally, if a performance issue occurs, sorter files should be generated. Please reproduce the problem with verbose logging enabled on source_load, source_capture, sorter, target_load, and target_apply. Additionally, please create a support ticket with the diagnostic package and the verbose task log.

Regards,
Desmond
