simonB2020
Creator

Latency

I have 2 Tasks.

The 1st uses an SAP Application (DB) source and targets a log stream.
The 2nd sources that log stream and targets AWS Redshift.

Just checking the monitors now, and latency is showing at over an hour.

Can anyone tell me 'why'?
Or at least where I should start looking to find out why?

It surely should not take an hour to replicate a change from source to target, so I assume I am either reading these stats incorrectly or have something poorly configured somewhere.

Thanks for any guidance.

[Screenshots: "Logstream Target" and "Redshift Target" task monitor views]

 


5 Replies
Alan_Wang
Support
(Accepted Solution)

Hi Simon,

When you click on the Apply Latency icon, there will be source and target latency graphs. Can you check whether the first task (the log stream parent/staging task) shows source latency, target latency, or both?

If it is only source latency, the source and target latency lines will match up. If there is target latency, the target line will be higher than the source line.
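To make that concrete, here is a tiny illustrative check in Python (values are made up, not read from the screenshots): the apply-side share of the latency is simply the gap between the target and source lines.

def diagnose(source_latency_s: float, target_latency_s: float, tolerance_s: float = 5) -> str:
    # Target latency is measured from the source commit, so it is always >= source latency;
    # the difference is what the apply (target) side adds on top.
    apply_latency_s = target_latency_s - source_latency_s
    if apply_latency_s <= tolerance_s:
        return "lines match -> the delay is on the source/capture side"
    return f"target line is {apply_latency_s:.0f}s above the source line -> target/apply latency"

print(diagnose(source_latency_s=3600, target_latency_s=3610))  # mostly source latency
print(diagnose(source_latency_s=10, target_latency_s=3610))    # mostly target/apply latency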

 

 

If the issue is solved please mark the answer with Accept as Solution.
KellyHobson
Former Employee
(Accepted Solution)

Hey @simonB2020 

In addition to Alan's comment about drilling into the Apply Latency graphs to narrow down the type of latency, here is an article to help troubleshoot latency:

https://community.qlik.com/t5/Knowledge/Troubleshooting-Qlik-Replicate-Latency-and-Performance-Issue...

Thanks,

Kelly

simonB2020
Creator
Author

Thanks Kelly, I'll have a read through and investigate!

SwathiPulagam
Support

Hi @simonB2020 ,

 

It seems like you have target latency only.
Start troubleshooting by increasing the TARGET_APPLY logging level to Trace to monitor how frequently your batches are being applied and how long each batch takes. If you can't find more information in the log file and need help, please create a support case.
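As a rough aid for that monitoring, the sketch below counts TARGET_APPLY trace lines per minute in a reptask log. The log path and the line pattern are illustrative assumptions only; adjust the regex to whatever the trace lines in your log actually look like.

import re
from collections import Counter

LOG_FILE = "/opt/attunity/replicate/data/logs/my_task.log"  # hypothetical path -- use your task's log
# Assumed line shape: "<seq>: <ISO timestamp> ... TARGET_APPLY ..."; adjust to your log format.
PATTERN = re.compile(r"^\d+:\s+(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}).*TARGET_APPLY", re.IGNORECASE)

per_minute = Counter()
with open(LOG_FILE, errors="replace") as f:
    for line in f:
        match = PATTERN.search(line)
        if match:
            per_minute[match.group(1)] += 1  # bucket by minute of the timestamp

for minute, count in sorted(per_minute.items()):
    print(minute, count)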

 

Thanks,

Swathi

Heinvandenheuvel
Specialist III

I don't think it is TARGET_APPLY-based latency, but rather source change-reading latency.

The latency of 1H22M04 shown by the "Redshift Target" task is the 'sum' of the latency of the logstream feeding it and of the process itself. Logstream reading is fast and 'pre-sorted', so you are unlikely to see, for example, 'waiting for Tx to commit'. That task seems to have almost 3 minutes of apply latency of its own, as far as the screenshots allow one to judge. A couple of minutes is not great, we all want seconds, but it could well be in the realm of intentional configuration settings balancing latency against resource consumption (for example, transmitting many small files often is more expensive than transmitting fewer, larger files less frequently).
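For reference, a quick back-of-the-envelope with the two figures quoted above (approximate, since they are read off the monitor screenshots):

# "Logstream Target" task latency: 1h19m13s; "Redshift Target" task latency: 1h22m04s
logstream_task_latency_s = 1 * 3600 + 19 * 60 + 13   # 4753 s
redshift_task_latency_s = 1 * 3600 + 22 * 60 + 4     # 4924 s
redshift_own_share_s = redshift_task_latency_s - logstream_task_latency_s
print(redshift_own_share_s, "seconds, i.e. roughly 2m51s attributable to the Redshift task itself")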

So with that, let's now assume your main worry is the 1h19m13 "Logstream Target" task latency. Well, we see that all outstanding TARGET_APPLY transactions are in memory. Normally (with default settings) transactions taking longer than a minute would move to disk, and that counter is shown as zero. Therefore I conclude there is no significant target latency in that task, which is normal for a logstream target: it just writes to a file, with some added details (sequence numbers, timestamps, ...).

If this is all correct, and I'm happy to have mistakes in my reasoning pointed out, then that leaves us with source change-reading latency. Unfortunately you only indicate SAP, not the underlying DB.

I recommend you study that area first. As always, be sure to have PERFORMANCE set to Trace, and try setting SOURCE_CAPTURE to Trace for a few minutes while watching the reptask log file size. You may be able to go longer, or even go Verbose, but typically that generates too much stuff to look at, too fast.
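One rough way to watch that log growth while the extra tracing is on, assuming a hypothetical log path (point it at your actual reptask log):

import os
import time

LOG_FILE = "/opt/attunity/replicate/data/logs/my_task.log"  # hypothetical path

previous_size = os.path.getsize(LOG_FILE)
for _ in range(10):            # sample ten 30-second intervals (~5 minutes)
    time.sleep(30)
    current_size = os.path.getsize(LOG_FILE)
    print(f"+{(current_size - previous_size) / 1024:.0f} KiB in the last 30 s")
    previous_size = current_size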

If more help is needed, then I recommend providing more details on the main (SAP) source configuration, and maybe some comments or JSON snippets with the core performance choices. Which SAP endpoint flavor? Versions? "RFC call batch"? CDC mode: trigger-based or log-based CDC? Redo/backup settings? Retention times? Please avoid screenshots in favor of raw (text) data.

In the end, you may well need to open a support case for this, but for that you would need to verify the above observations and gather the requested details anyway.

Hope this helps some,

Hein.