CDC (delta) records are showing high latency before they reflect in the target of Qlik Replicate Log Stream tasks, on Qlik Replicate version 6.6. The target endpoint is S3 storage. Are there parallel threads that need to be enabled on the Oracle side, or does a Qlik setting have to be adjusted?
Source endpoint:
Oracle Database 11g Enterprise Edition
Release 11.2.0.3.0 (64-bit Production)
Hello Kavin,
Here's an initial Qlik article to help you narrow down possible causes for latency.
Hi Team,
Can you please review this community article regarding Latency/Performance troubleshooting:
Best,
Hordy
Hey @kavin88p ,
What is your latency threshold or limit? Based on the output, we see a maximum source latency of 6.90 seconds and a maximum handling latency of 18.00 seconds.
Depending on your task (e.g., the number of tables, how active the tables are, any transformations, etc.), latency can fluctuate based on how many transactions are being processed at one time.
For performance tuning, we highly recommend engaging our Professional Services team. They can assist with tuning your environment while taking various architectural factors into account.
Here is the link to contact PS regarding an engagement:
https://www.qlik.com/us/services/qlik-consulting/contact-consulting
Thank you,
Kelly
The PERFORMANCE logger is great at exposing the root cause.
With 6 seconds of source latency, is there some sort of stress on the source environment? Is the task using LogMiner or the Replicate Log Reader? How close is your Replicate server to the source database?
For the target, you have handling latency, which usually means there is a bit of a "traffic jam" because of volume, or it could be bad data being rejected (check the task log with TARGET_APPLY set to Verbose, for 5 minutes only).
This sounds like a classic performance-tuning need, and as written above, the Professional Services team is the team that assists with this.
Hope this gives you some insight!
Sincerely,
Barb
Hello,
Is it source or target latency? Can you check whether there are any bulk apply errors in TARGET_APPLY, or whether the task is switching from bulk apply to one-by-one apply?
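One quick way to check is to scan the task log for fallback hints. A minimal sketch, with one big caveat: the marker strings below are assumptions, since the exact wording of these messages varies by Replicate version, so confirm against your own task log before relying on them:

```python
# Sketch: count how often a task log hints at a bulk-to-one-by-one fallback.
# The marker strings are ASSUMPTIONS about the log wording, not verified
# message text -- adjust them to match what your Replicate version emits.
FALLBACK_MARKERS = ("one-by-one", "bulk apply operation failed")

def count_fallbacks(log_lines):
    """Count log lines that contain any of the fallback marker strings."""
    return sum(
        1 for line in log_lines
        if any(marker in line.lower() for marker in FALLBACK_MARKERS)
    )

# Hypothetical sample lines for illustration only.
sample = [
    "2022-05-26T15:07:07 [TARGET_APPLY ]T: Bulk apply operation failed",
    "2022-05-26T15:07:08 [TARGET_APPLY ]T: Switching to one-by-one mode",
    "2022-05-26T15:07:09 [PERFORMANCE  ]T: Source latency 6.90 seconds",
]
print(count_fallbacks(sample))  # 2
```

If the count is non-zero during a latency spike, the fallback (and whatever error triggered it) is a likely contributor.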
Thanks
Lyka
Hi @kavin88p ,
For the S3 target, Batch Optimized Apply mode is not supported, so all changes are applied in Transactional Apply mode.
We first have to understand what kind of latency this is (source or target). If it is target latency and one of the tables is receiving a large volume of changes, then you should create a new task for that table alone.
Thanks,
Swathi
I can see the latency stats below, with performance logging set to Trace, in the Qlik Log Stream task for the first 15 minutes. There are no LOB tables.
Line 2893: 00018198: 2022-05-26T15:07:07 [PERFORMANCE ]T: Source latency 6.90 seconds, Target latency 10.90 seconds, Handling latency 4.00 seconds (replicationtask.c:3330)
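For anyone tracking these numbers over a longer run, the PERFORMANCE lines can be scraped with a short script. This is a minimal sketch; the pattern is derived from the single log line quoted above, not from official documentation, so verify it against your own logs:

```python
import re

# Pattern modeled on the PERFORMANCE log line quoted above; the exact
# layout may differ across Replicate versions.
PERF_RE = re.compile(
    r"Source latency (?P<source>[\d.]+) seconds, "
    r"Target latency (?P<target>[\d.]+) seconds, "
    r"Handling latency (?P<handling>[\d.]+) seconds"
)

def parse_latency(line):
    """Return (source, target, handling) latencies in seconds, or None."""
    m = PERF_RE.search(line)
    if m is None:
        return None
    return tuple(float(m.group(k)) for k in ("source", "target", "handling"))

line = ("00018198: 2022-05-26T15:07:07 [PERFORMANCE ]T: Source latency 6.90 "
        "seconds, Target latency 10.90 seconds, Handling latency 4.00 seconds "
        "(replicationtask.c:3330)")
print(parse_latency(line))  # (6.9, 10.9, 4.0)
```

Running this over the whole task log (and taking the max of each field) makes it easy to see whether the bottleneck is source, target, or handling latency.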
>> Source latency 6.90 seconds, Target latency 10.90 seconds
So what's the problem? Don't worry, be happy!
Is this what the topic was created for, "High data latency for Log Stream tasks", or is this the best case?
Please clarify the problem statement with actuals, expectations, and requirements, because a latency of just a few seconds is considered excellent in most usage scenarios I have worked with over the past 10 years.
As you clarify, give us a hint about the table count, change rates, and, most importantly, the Oracle redo generation volume stats (MB/sec, typical redo log size, how many switches per hour, and so on), and finally an indication of network connectivity (10Gb?).
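As a back-of-the-envelope example of the redo-rate arithmetic (all figures below are placeholders for illustration, not measurements from this environment):

```python
# Rough redo generation rate from log-switch counts.
# Both inputs are ASSUMED placeholder values -- substitute your own redo log
# size and peak switches-per-hour from the database.
redo_log_size_mb = 512    # size of one online redo log (assumption)
switches_per_hour = 20    # log switches per hour at peak (assumption)

mb_per_hour = redo_log_size_mb * switches_per_hour
mb_per_sec = mb_per_hour / 3600

print(f"{mb_per_hour} MB/hour ~ {mb_per_sec:.2f} MB/sec")
```

Numbers like these tell you whether the source is generating change volume that a single CDC reader can realistically keep up with.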
You'll need this information for your own understanding, and to start a consultancy assignment if that's the way to go.
hth,
Hein.