Hello everyone,
I've configured a Log Stream task to read from an Oracle database, along with a replication task that reads from this Log Stream and replicates data to Snowflake.
Currently, the replication task targeting Snowflake is experiencing very high latency, while the Log Stream task shows zero latency.
What could be the cause of this behaviour? Are there recommended solutions for this kind of latency pattern?
Thank you in advance for your assistance.
Hi @raffaelec ,
Regarding performance, I believe that the 'loading method,' 'max file size (MB),' and 'number of files to load per batch' are relevant factors.
Generally, when a performance issue occurs, sorter files are generated as changes accumulate on disk. Please reproduce the problem with verbose logging enabled on SOURCE_LOAD, SOURCE_CAPTURE, SORTER, TARGET_LOAD, and TARGET_APPLY. Then please open a support ticket and attach the diagnostic package and the verbose task log.
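While waiting on support, a quick way to spot where the task stalls is to measure the time gaps between consecutive entries per component in the verbose task log. The sketch below is a minimal example; the log path and the exact line shape (ISO timestamp followed by the component name in brackets) are assumptions based on a typical Replicate task log, so adjust the path and regex to match yours.

```python
import re
from datetime import datetime

# Assumed task log location -- adjust to your Replicate data directory and task name.
LOG_PATH = r"C:\Program Files\Attunity\Replicate\data\logs\tasks\my_snowflake_task.log"

# Assumed line shape: "00001234: 2024-05-01T10:15:30 [TARGET_APPLY ]V: ..."
LINE_RE = re.compile(
    r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}).*\[(TARGET_APPLY|SORTER|SOURCE_CAPTURE)\s*\]"
)

prev = {}  # component -> timestamp of its last log line
with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        m = LINE_RE.search(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m.group(1))
        comp = m.group(2)
        if comp in prev and (ts - prev[comp]).total_seconds() > 60:
            # A gap of more than a minute suggests the component was stalled.
            gap = (ts - prev[comp]).total_seconds()
            print(f"{comp} silent for {gap:.0f}s before {ts}")
        prev[comp] = ts
```

If TARGET_APPLY shows long silent gaps while SOURCE_CAPTURE keeps logging, that matches the pattern of a target-side bottleneck.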
Regards,
Desmond
Hi @raffaelec ,
Did this issue start after a maintenance job? Please enable TRACE logging on TARGET_APPLY to verify that the task is running and to check its performance. I suspect it may be related to a large transaction with a single commit. Please also check whether many sorter files were generated.
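To check the sorter files, look in the task's sorter folder on the Replicate server. The path below is an assumption based on the default data-directory layout; substitute your own data directory and task name.

```python
import os

# Assumed default layout: <REPLICATE_DATA_DIR>/tasks/<task_name>/sorter
# Adjust DATA_DIR and TASK_NAME for your installation.
DATA_DIR = r"C:\Program Files\Attunity\Replicate\data"
TASK_NAME = "my_snowflake_task"

sorter_dir = os.path.join(DATA_DIR, "tasks", TASK_NAME, "sorter")
files = sorted(os.listdir(sorter_dir)) if os.path.isdir(sorter_dir) else []

# Many or large sorter files indicate changes piling up faster than the
# target can apply them.
total_mb = sum(os.path.getsize(os.path.join(sorter_dir, f)) for f in files) / 1e6
print(f"{len(files)} sorter file(s), {total_mb:.1f} MB total")
for name in files:
    print(" ", name)
```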
Without investigating the task log, it is difficult to determine the cause. I recommend creating a support ticket.
Regards,
Desmond
Hello @DesmondWOO,
Thank you for your answer.
I enabled TRACE logging on TARGET_APPLY. The log file only recorded several “task is running” messages afterward.
I also checked the sorter folder for the Snowflake task on the Qlik Replicate server, but it was empty.
Moreover, I tried stopping and resuming the Snowflake task, but it became stuck in the “waiting for open transactions to be committed” state. After three hours I tried reloading the Snowflake task, but it got stuck in the same state. It seems the only way to resolve this is to reload the Log Stream task and then the Snowflake task.
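For what it's worth, the “waiting for open transactions to be committed” state usually points at transactions that were still open on the Oracle source when the task was asked to stop. You can check for long-open transactions by querying V$TRANSACTION on the source. The sketch below uses the python-oracledb driver; the connection details are placeholders, and the session needs SELECT privileges on the V$ views.

```python
import oracledb

# Placeholder credentials -- replace with your Oracle source connection.
conn = oracledb.connect(user="system", password="secret", dsn="dbhost:1521/ORCLPDB1")

# V$TRANSACTION lists open transactions; joining V$SESSION shows who owns them.
query = """
    SELECT s.sid, s.username, s.program, t.start_time, t.used_urec
    FROM v$transaction t
    JOIN v$session s ON t.ses_addr = s.saddr
    ORDER BY t.start_time
"""

with conn.cursor() as cur:
    for sid, username, program, start_time, used_urec in cur.execute(query):
        # used_urec is the number of undo records -- a rough size indicator.
        print(f"sid={sid} user={username} program={program} "
              f"open since {start_time} ({used_urec} undo records)")
```

If a very old or very large transaction shows up here, the task cannot stop cleanly until it commits or rolls back, which would explain the behaviour you are seeing.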
Is there any internal parameter or task configuration that can be used to improve the handling of large transactions within the Log Stream and Snowflake tasks during CDC?
Thank you in advance.
Best regards