MoeyE
Partner - Creator III

Need some pointers

Hi,

I need some advice. I am working on a task that, as part of its requirements, must run in transactional apply mode. We tested the task with a script that performs updates on the source database beforehand; with a local target, apply latency stays very low (< 5 seconds). When we run the same test against a cloud target, apply latency increases steadily for the entire duration the script runs on the source. We narrowed this down to handling latency. Source latency consistently remains very low throughout the testing (< 5 seconds), so that isn't the issue. We saw that the outgoing buffers were filling up, so we increased them all the way to 2 GB, at which point they stopped filling up. However, handling latency still behaves the same.

The strange behaviour is that handling latency keeps increasing until the script is stopped, at which point it drops back below 5 seconds within about 30 seconds of the script stopping on the source. This behaviour doesn't really make sense to me: if the target endpoint is capable of applying the changes that quickly, even in transactional apply mode, why does latency increase only while the script is running? It would make sense if source latency were increasing, but source latency stays extremely low the whole time.

I will also begin investigating the target endpoint to see whether write speed is slow there. Still, this strange behaviour doesn't make sense, and I feel it is the key to finding and fixing the issue.

I hope my explanation is sufficient.

edit: an idea I'm currently toying with is that the script may not commit until it finishes, so all the changes get committed at the end. However, this also doesn't align with the facts: apply latency is measured from the moment the source commits the changes and Replicate notices them, until the moment they are applied. Source latency remains very low and has no issues, so it doesn't seem to be an uncommitted transaction; otherwise Replicate wouldn't even pick the changes up.
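One way to sanity-check the commit hypothesis is to confirm that a second connection cannot see a writer's changes until the writer commits, which is the same reason Replicate only captures committed transactions. This is a minimal, hypothetical sketch using SQLite purely as a stand-in for the real source database; the table name `t` and the file path are made up for illustration:

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in: SQLite plays the role of the source database, and the
# second connection plays the role of Replicate's change reader. The point is
# that uncommitted changes are invisible to other connections.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
writer.commit()

reader = sqlite3.connect(path)

# The writer inserts a row but does not commit yet.
writer.execute("INSERT INTO t VALUES (1, 'pending')")

# While the writer's transaction is open, the reader sees zero rows.
rows_before_commit = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()

# After the commit, the same query from the reader sees the row.
rows_after_commit = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(rows_before_commit, rows_after_commit)  # 0 1
```

If the load script committed only at the very end, the reader (like Replicate) would see nothing until then, which would surface as source latency rather than handling latency, consistent with the reasoning above.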

Regards,

Mohammed

 

2 Replies
SushilKumar
Support

Hello Team,

Not sure whether this could be the cause. Did you observe the same behaviour when executing the statements one by one on the source? It would also be useful to share the participating source and target endpoints.

Regards,

Sushil Kumar

Dana_Baldwin
Support

Hi @MoeyE 

I agree with your assessment - the changes definitely will not be moved to the target until the source commits - but I expect the latency to show as source latency in that scenario.

We can increase stream buffers for handling latency, but that only helps if you have LOB columns.

We may need to look into this more closely to see what is happening and why. Please open a support case & attach a diagnostics package & logs showing the issue.

Thanks,

Dana