
Latency too high after restarting task
Resuming a task just after stopping it causes the latency to increase for approximately 30 minutes. This started happening after we moved the target to the Azure cloud. Is there a way to tune this?
Thank You
Accepted Solutions

Your UI snapshots suggest that after a resume any fresh incoming changes are quickly read and pushed to the apply stream, where they sit in memory for a while before migrating to on-disk storage because they fail to apply in time.
I suspect the system is failing to connect to the target DB. This is a SOFT failure in Replicate; it will just keep on trying. Check the logs for the lines immediately following '[TARGET_APPLY ]I: Going to connect to server '. Eventually it should be followed by '[TASK_MANAGER ]I: All stream components were initialized '.
You probably want to stop, switch logging for TARGET_APPLY to TRACE, and resume again. Hopefully any connection failure will have enough details. If not, do it again with logging set to VERBOSE.
Sometimes initial connections, due to DNS issues perhaps, seem to go around the world a couple of times, or to the moon and back, before finding the target. Once connected, all will likely be fine.
If the connections do succeed in reasonable time, then look at the 'apply' events and the log entries for that.
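If it helps, here is a rough sketch of how you could scan the task log for those two lines and see how long the connection phase actually took. The log path and the timestamp layout are assumptions on my part; adjust them to your installation.

```python
# Rough sketch: measure the time between "Going to connect to server" and
# "All stream components were initialized" in a Replicate task log.
# The log path and timestamp format (YYYY-MM-DDTHH:MM:SS) are assumptions.
import re
from datetime import datetime

LOG = r"C:\Program Files\Attunity\Replicate\data\logs\mytask.log"  # hypothetical path
TS = re.compile(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})")

connect_ts = init_ts = None
with open(LOG, errors="replace") as f:
    for line in f:
        m = TS.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S")
        if "Going to connect to server" in line and connect_ts is None:
            connect_ts = ts
        elif "All stream components were initialized" in line:
            init_ts = ts
            break

if connect_ts and init_ts:
    print(f"Connect -> initialized took {(init_ts - connect_ts).total_seconds():.0f} s")
else:
    print("Did not find both markers; check the log level (TARGET_APPLY TRACE).")
```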
Also, the task is not re-loading tables as part of the resume, is it? Perhaps it is convinced the target table structure changed?
Hein.

Hi @Heinvandenheuvel , I have set the log to verbose. I see the task is creating all the metadata again for every table, and apparently this is why the latency increases. I guess this is the expected behaviour, and that is why every restart takes about the same amount of time.
Thank You

Hi @Jperezmatus
Performance tuning depends on many factors and there is no one-size-fits-all setting. This would be a tuning exercise where we would need to look at various items such as the volume of changes, the speed of the target database, and latency requirements, among other variables. This would allow us to make an educated guess to start a trial-and-error process.
If you would like to do the tuning on your side, you can refer to our user guide; there is a section called "Change Processing Tuning", and from there it would be a trial-and-error exercise. If you would like us to do the tuning for you, we can bring in our Professional Services team; if you're interested in that, please reach out to your account rep.
Change Processing Tuning ‒ Qlik Replicate
Thanks,
Dana

Hi,
Thank you for the post to the Forums. As a note to Dana's update, you also have to consider the Task and the connections to the Source and Target, where there can be up to 5 or 6 connections for the data flow between Source and Target. While the Task is stopping, it has to clean up those connections and roll back anything outstanding on the Source or Target side. How long was the Task down before you resumed it? As noted, we do defer to the Professional Services team to do this work for you; if you're interested in that, please reach out to your account rep.
Note: Setting the PERFORMANCE logger to TRACE while the Task has latency will also help identify whether it is Source or Target latency.
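If you want to pull those numbers out of the log quickly, something along these lines should do it. This is only a rough Python sketch; the log path is a placeholder, and the pattern matches the usual '[PERFORMANCE ]T: Source latency X seconds, Target latency Y seconds' trace line.

```python
# Rough sketch: summarize PERFORMANCE trace lines to see whether the
# Source or Target side is driving the latency. The log path is a placeholder.
import re

LOG = r"C:\path\to\task.log"  # placeholder
PAT = re.compile(r"Source latency ([\d.]+) seconds, Target latency ([\d.]+) seconds")

source, target = [], []
with open(LOG, errors="replace") as f:
    for line in f:
        m = PAT.search(line)
        if m:
            source.append(float(m.group(1)))
            target.append(float(m.group(2)))

if target:
    print(f"samples={len(target)}  max source={max(source):.1f}s  max target={max(target):.1f}s")
    side = "Target" if max(target) > max(source) else "Source"
    print(f"Latency appears to be dominated by the {side} side.")
```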
Thanks!
Bill

Thank you all for your answers. The latency we have is excellent for the transaction volume we have. We only have one source and one target. I stopped the task with no incoming changes and no in-memory or on-disk apply transactions. The log shows that the last state of the task is saved. After 2 minutes I restarted the task and had 230 pending transactions, which is almost nothing. But since the restart the task starts to accumulate transactions on disk; latency is now 4 minutes with 482 pending transactions on disk. The apply throughput is very low and does not catch up with the pending transactions. Latency continues to increase (applying until target commit). Latency is now 10 minutes with 950 pending transactions to apply. It takes almost 30 minutes before latency goes back to normal again.
Thanks

Hi,
For the Task settings, can you confirm the change processing is set to Batch optimized? Also, having PERFORMANCE set to TRACE while the Task has latency would help show whether it is a Source or Target issue, along with checking the replicate\data\tasks\sorter directory for accumulated stwp (swap) files. As noted, this is not a lot of transactions; I hope this information helps, otherwise I would reach out to your Account team for a PS engagement to tune the environment. If you see one-by-one mode in the Task, you may want to open a case with Support, as this could also hinder the Task and latency.
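A quick way to keep an eye on the sorter folder while the task is catching up would be something like the sketch below. The task folder name and the swap-file pattern are placeholders, so adjust them to your data directory.

```python
# Rough sketch: poll the task's sorter folder and report how many swap
# files it holds and their total size. The path and the "swp" name match
# are placeholders; adjust to your installation's data directory.
import time
from pathlib import Path

SORTER = Path(r"C:\Program Files\Attunity\Replicate\data\tasks\MyTask\sorter")  # placeholder

while True:
    files = [p for p in SORTER.glob("*") if "swp" in p.name.lower()]
    total_mb = sum(p.stat().st_size for p in files) / (1024 * 1024)
    print(f"{time.strftime('%H:%M:%S')}  swap files: {len(files)}  total: {total_mb:.1f} MB")
    time.sleep(30)
```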
Thanks!
Bill

Hi, the task has change processing set to Batch optimized. With PERFORMANCE set to TRACE, the log shows messages about increasing latency on the target, matching the Apply Latency monitor graph. There are two swap files in the sorter folder. After approximately 30 minutes everything starts to work fine again.
00004580: 2022-11-22T16:26:16 [PERFORMANCE ]T: Source latency 0.19 seconds, Target latency 545.93 seconds, Handling latency 545.74 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:26:46 [PERFORMANCE ]T: Source latency 0.79 seconds, Target latency 576.06 seconds, Handling latency 575.27 seconds (replicationtask.c:3734)
00002120: 2022-11-22T16:27:11 [SORTER ]I: Task is running (sorter.c:704)
00004580: 2022-11-22T16:27:16 [PERFORMANCE ]T: Source latency 0.31 seconds, Target latency 606.19 seconds, Handling latency 605.88 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:27:46 [PERFORMANCE ]T: Source latency 0.96 seconds, Target latency 634.44 seconds, Handling latency 633.48 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:28:16 [PERFORMANCE ]T: Source latency 0.46 seconds, Target latency 664.52 seconds, Handling latency 664.06 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:28:46 [PERFORMANCE ]T: Source latency 0.81 seconds, Target latency 694.55 seconds, Handling latency 693.73 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:29:16 [PERFORMANCE ]T: Source latency 0.31 seconds, Target latency 724.63 seconds, Handling latency 724.32 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:29:47 [PERFORMANCE ]T: Source latency 0.84 seconds, Target latency 754.44 seconds, Handling latency 753.60 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:30:17 [PERFORMANCE ]T: Source latency 0.36 seconds, Target latency 784.50 seconds, Handling latency 784.14 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:30:47 [PERFORMANCE ]T: Source latency 0.88 seconds, Target latency 814.60 seconds, Handling latency 813.73 seconds (replicationtask.c:3734)
00004580: 2022-11-22T16:31:17 [PERFORMANCE ]T: Source latency 0.48 seconds, Target latency 844.47 seconds, Handling latency 843.98 seconds (replicationtask.c:3734)
Suddenly, after approximately 40 minutes, the task applies everything pending and starts behaving normally. This happens every time I stop and then resume the task, regardless of the number of pending transactions to apply.
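For what it is worth, a quick calculation over the excerpt above shows the target latency growing at roughly one second per elapsed second, i.e. practically nothing is being applied during that window:

```python
# Quick check on the excerpt above: if target latency grows by ~1 second
# for every elapsed second, the apply side is effectively stalled.
from datetime import datetime

samples = [  # (timestamp, target latency in seconds) taken from the log above
    ("2022-11-22T16:26:16", 545.93),
    ("2022-11-22T16:27:16", 606.19),
    ("2022-11-22T16:28:16", 664.52),
    ("2022-11-22T16:29:16", 724.63),
    ("2022-11-22T16:30:17", 784.50),
    ("2022-11-22T16:31:17", 844.47),
]

fmt = "%Y-%m-%dT%H:%M:%S"
t0, lat0 = samples[0]
t1, lat1 = samples[-1]
elapsed = (datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)).total_seconds()
print(f"growth rate = {(lat1 - lat0) / elapsed:.2f} s of latency per elapsed second")
# ~0.99 here, so the target apply is making essentially no progress yet.
```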
Thank You

Hi,
For this Task, can you also check the Change Processing Tuning section to see how often you send batches to the Target, along with the transaction offload tuning settings that control when data is written to disk? These are the other things to check; to further help with the environment you may want to reach out to your Qlik Account team for a PS engagement.
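As a rough illustration of the knobs to look at there (the setting names are from the Change Processing Tuning tab as I recall them, and the values are examples only, not recommendations for your task):

```python
# Illustration only: typical Batch optimized apply settings on the Change
# Processing Tuning tab. Names are approximate and values are examples.
batch_tuning = {
    "apply_batched_changes_longer_than_sec": 1,   # minimum wait before a batch is applied
    "apply_batched_changes_less_than_sec": 30,    # maximum wait before a batch is forced out
    "force_apply_when_memory_exceeds_mb": 500,    # flush a batch early if it grows this large
}

# Back-of-envelope: the "less than" window is roughly the extra apply latency
# you accept in exchange for bigger, more efficient batches.
print(f"Worst-case added latency per batch ~ {batch_tuning['apply_batched_changes_less_than_sec']} s")
```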
Bill
Hey @Jperezmatus ,
What is the target endpoint type? You say Azure Cloud, but we have several options this could fall under.
Thanks,
Kelly

Hi,
Please refer to the link below for latency and performance issues.
Troubleshooting Qlik Replicate Latency and Perform... - Qlik Community - 1929456
Thanks
Naren

Hi Kelly, I am sorry for the omission. The target is Microsoft SQL Server; we moved this SQL Server to Azure.
Thank You
