hanna_choi
Partner - Creator

I want to know about tuning points for Qlik Replicate.

Hello,

I need your help.

Is the PK/FK relationship a problem?
Or have I configured the task incorrectly?

 

I have a latency issue.

  • Source: Oracle 10g (on-premises)
  • Target: AWS RDS for Oracle 19c

Source Environment: 

Schema: DBUSER1
Tables: 434
Segment Size: 357 GB
Comment: Some tables reference tables in DBUSER3

Schema: DBUSER2
Tables: 360
Segment Size: 85 GB
Comment: PK and FK constraints

Schema: DBUSER3
Tables: 396
Segment Size: 161 GB
Comment: PK and FK constraints

Qlik Replicate Config:

• Tasks: 3 (one per source DB schema)
• Apply Changes only
• Log Stream disabled
• Change Processing Mode: Transactional apply
• All other settings: default

 

Issue : 

• DBUSER2, DBUSER3: no issue
• DBUSER1: latency has been growing since CDC started, and the disk filled up due to accumulating temp files on the Replicate server


8 Replies
KellyHobson
Support

Hey @hanna_choi ,

Thanks for posting on the Community page!

From your post, "Change Processing Mode: Transactional apply" stands out, as this processes changes one by one, which can contribute to latency.

Is it possible to enable/take advantage of batch mode processing?

Please also reference this article for additional steps on troubleshooting latency.

https://community.qlik.com/t5/Knowledge/Troubleshooting-Qlik-Replicate-Latency-and-Performance-Issue...

Thanks, 

Kelly

lyka
Support

Good day!

 

In addition, please also consider the following:

1. How many tasks are using the same source endpoint? If multiple, consider using Log Stream.

2. How close is the source database to the Replicate server?

3. Are there any errors that are causing the latency?

 

Performance tuning requires some trial and error, and it's best to engage our Professional Services team to assist you.

 

As a start, you can also refer to this link for some info on change processing tuning parameters:

https://community.qlik.com/t5/Knowledge/General-understanding-of-Qlik-Replicate-Change-Processing-Tu...

 

Thanks

Lyka

hanna_choi
Partner - Creator
Author

Hi Kelly

One task contains many tables with FKs.
Can I use batch mode for this task?

 

Best regards,

hanna.choi

hanna_choi
Partner - Creator
Author

Hi Lyka

 

1. How many tasks are using the same source endpoint? If multiple, consider using Log Stream.
- 3 tasks are using the same source endpoint.
- The performance of reading logs from the source is OK.
There were too many redo log temp files on the Replicate server, so the disk filled up.
It seems that applying changes to the target is slower than reading the logs from the source.
In this case, would applying Log Stream improve performance?

2. How close is the source database to the Replicate server?
- Source database: on-premises (data center)
- Replicate server: AWS (Region: Korea)
- Target database: AWS (Region: Korea)


3. Are there any errors that are causing the latency?
- Latency was high, with no errors.

 

Best regards,

hanna.choi

KellyHobson
Support

Hey @hanna_choi 

Can you confirm if the tables have a PK or unique index?

From our User guide: " Changes to tables without a Unique Index or Primary Key will always be applied in Transactional apply mode."

https://help.qlik.com/en-US/replicate/May2022/Content/Global_Common/Content/SharedEMReplicate/Custom...
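As a quick check (an illustrative query only, assuming the DBUSER1 schema from the post and the standard Oracle dictionary views), the following lists tables that have neither a primary key or unique constraint nor a unique index, i.e. the tables Replicate will always apply transactionally:

-- Tables in DBUSER1 with no PK/unique constraint and no unique index
SELECT t.table_name
FROM   all_tables t
WHERE  t.owner = 'DBUSER1'
AND    NOT EXISTS (SELECT 1
                   FROM   all_constraints c
                   WHERE  c.owner = t.owner
                   AND    c.table_name = t.table_name
                   AND    c.constraint_type IN ('P', 'U'))  -- primary key or unique constraint
AND    NOT EXISTS (SELECT 1
                   FROM   all_indexes i
                   WHERE  i.table_owner = t.owner
                   AND    i.table_name = t.table_name
                   AND    i.uniqueness = 'UNIQUE')          -- unique index without a constraint
ORDER  BY t.table_name;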

For transactional apply, each transaction is applied individually in the order it is committed. Because it does not batch the transactions, it is not as efficient as Batch optimized apply mode.
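To illustrate the difference (a conceptual sketch only, using a hypothetical ORDERS table and staging table; this is not the SQL Replicate actually generates), transactional apply replays every change as its own statement in commit order, while batch optimized apply accumulates changes and applies them in one set-based pass:

-- Transactional apply (conceptual): one statement per source change, in commit order
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1001;
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1002;
DELETE FROM orders WHERE order_id = 1003;

-- Batch optimized apply (conceptual): accumulated net changes are merged in bulk
MERGE INTO orders t
USING orders_net_changes s                -- hypothetical staging table of net changes
ON (t.order_id = s.order_id)
WHEN MATCHED THEN
  UPDATE SET t.status = s.status
  DELETE WHERE s.op = 'D'                 -- rows whose final operation is a delete
WHEN NOT MATCHED THEN
  INSERT (order_id, status)
  VALUES (s.order_id, s.status)
  WHERE  s.op <> 'D';                     -- do not re-insert deleted rows

The row-by-row form preserves ordering within each transaction, which is why FK constraints on the target push you toward it; the bulk form is much faster but assumes the target can tolerate set-based application.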

Thanks,

Kelly

lyka
Support

Hi @hanna_choi!

 

It is better to have the Replicate server close to the source database. As for Log Stream, it will help reduce the overhead on the source database, so I would still go for it, but make sure that you test any changes before moving to production.

 

Here is an article from our community about Log Stream. Hope it helps!

 

https://community.qlik.com/t5/Knowledge/Log-Stream-Staging-The-Why-and-the-How/ta-p/1712077

 

Thanks

Lyka

 

 

Michael_Litz
Support

Hi,

If the FKs are on the source database, they do not matter to Replicate. If the FKs are on the target database, they could cause errors in the task (when running in batch mode) if one of the FK constraints is violated. With FKs on the target tables you will want to maintain referential integrity by using transactional apply, and that is what drives up the latency on the target side.

If you do not need the FKs on the target tables, you could remove them and run in batch apply mode.
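If you decide the target FKs are not needed, something along these lines can help locate them on the RDS target and generate DISABLE statements for review (an illustrative query only, assuming the DBUSER1 schema name from the post; check the output before running anything):

-- List foreign key constraints on the target schema and generate DISABLE statements
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' DISABLE CONSTRAINT ' || constraint_name || ';' AS disable_stmt
FROM   all_constraints
WHERE  owner = 'DBUSER1'
AND    constraint_type = 'R'   -- 'R' = referential integrity (foreign key)
ORDER  BY table_name, constraint_name;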

Thanks,
Michael

Heinvandenheuvel
Specialist II

As Michael and Kelly indicate, if there is any performance concern you really should TRUST the source DB for the FK checking and NOT have foreign key constraints on the target, in order to allow batch mode CDC processing.

You only provided STATIC DB information, whereas the problem you are trying to get help with is 100% dynamic.

For CDC performance and tuning, it is just about completely irrelevant how many tables there are or how big the aggregated DB is. The only argument for size is that a large size suggests a high insert rate, as the data must have come from somewhere, but no timeline is given.

You need to know change rates! 100/sec? 1,000? 10,000? 100,000/sec? A very coarse guess for the maximum for transactional processing is in the 1,000 changes per second per task range, versus roughly 100,000/sec for batch processing. Of course this depends on many factors, and your specific maximum can be very different (better or worse).

One good indication of the change rate is the redo generation speed. How many MB/second on average?
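One hedged way to get that number on the Oracle source (an illustrative query, assuming the database is in ARCHIVELOG mode and you can query V$ARCHIVED_LOG) is to sum the archived redo volume per hour and divide by 3600 for an average MB/second:

-- Approximate redo generation per hour (MB) over the last 24 hours
SELECT TRUNC(completion_time, 'HH24')                 AS hour_start,
       ROUND(SUM(blocks * block_size) / 1024 / 1024)  AS redo_mb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
GROUP  BY TRUNC(completion_time, 'HH24')
ORDER  BY hour_start;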

A better indication is to simply run the task against a NULL target and/or a batch mode target with NO FK constraints enabled during your UAT testing, driven by the real source and UAT target.

While it is nice to develop and evaluate on a per-schema basis, for final performance it is likely to be desirable to handle all schemas in a single task. If FKs are an absolute, unavoidable requirement for certain target tables, it is better to split the tables into two tasks irrespective of schema: the tables that need FKs in a transactional-apply task and the rest in a batch-apply task.

Send money for more and better help! 

Good luck,

Hein.