user2828
Partner - Contributor III

High Latency in Change Processing – Help Understanding Memory vs Disk Buffers

Hi everyone,
I’m working on a Change Processing task in Qlik Replicate and I’m trying to understand how the “on memory” and “on disk” buffers behave on both the source and target sides.

What I’ve noticed is that when latency increases, the number of records sitting in memory or moving to disk changes a lot. I want to understand the following:

  1. What exactly does it mean when records are held “on memory” during change processing?

  2. When do records move “on disk,” and how does this affect CDC latency?

  3. If a task is showing more records on disk, what are the typical causes?

  4. What tuning steps or settings should I look at to prevent disk spillover and reduce latency?

I’m hoping to get clarity on how these buffers work internally and what practical tuning fixes I should apply when memory buffers fill and tasks start writing to disk.

Any detailed explanation or tuning guidance would really help. Thanks!

1 Solution

Accepted Solutions
DesmondWOO
Support

Hi @user2828 ,

1. How to know whether the source or the target has the issue.

Enable TRACE/VERBOSE logging level on the PERFORMANCE logger. This will help you determine whether the bottleneck is occurring at the source system or the target system.
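As an illustration of how you might read those latency figures, here is a small sketch in Python. The log lines below are made up for the example (the exact wording and layout of PERFORMANCE trace lines in your task log may differ), so adjust the regular expression to match your actual log files:

```python
import re

# Hypothetical PERFORMANCE trace lines; the exact format in your task log
# may differ, so adapt the pattern below to what you actually see.
log_lines = [
    "00001234: 2024-01-01T10:00:00 [PERFORMANCE ]T: Source latency 2.00 seconds, Target latency 48.00 seconds",
    "00001234: 2024-01-01T10:00:30 [PERFORMANCE ]T: Source latency 1.50 seconds, Target latency 52.00 seconds",
]

# Named groups pull out the two latency figures from each matching line.
pattern = re.compile(
    r"Source latency (?P<source>[\d.]+) seconds, "
    r"Target latency (?P<target>[\d.]+) seconds"
)

for line in log_lines:
    m = pattern.search(line)
    if m:
        src, tgt = float(m.group("source")), float(m.group("target"))
        # Whichever side shows the larger latency is the likely bottleneck.
        side = "target" if tgt > src else "source"
        print(f"source={src:.2f}s target={tgt:.2f}s -> bottleneck looks like the {side}")
```

In this fabricated sample the target latency dwarfs the source latency, which would point at the apply side rather than the capture side.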

2. If the issue is reading from the source (e.g., a large volume of logs is generated when activity occurs at the source), how to tune Replicate to ensure faster reads.
3. If the issue is writing to the target, what tuning needs to be done.

Tuning depends on multiple factors such as source system configuration, network throughput, and workload characteristics. For detailed guidance tailored to your environment, please engage with our Professional Services (PS) team.

4. Mainly, how to understand these things properly (e.g., with logging levels and by reading the logs), and how tuning the task can help increase throughput and reduce latency.

For CDC, you can enable TRACE level logging on the SOURCE_CAPTURE, TARGET_APPLY, and PERFORMANCE loggers to measure the throughput.

Regards,
Desmond

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!

View solution in original post

3 Replies
DesmondWOO
Support

Hi @user2828 ,

1. What exactly does it mean when records are held “on memory” during change processing?
Qlik Replicate keeps change data in memory while a transaction is being processed. Once the transaction is fully committed on the source and/or applied to the target, the data is released. If a transaction exceeds the available memory or remains uncommitted beyond the configured time threshold, Replicate will automatically offload that data to disk.
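To make that spill rule concrete, here is a minimal sketch in plain Python. The threshold values are made up for illustration (they stand in for whatever memory and duration limits your task settings use), and this is a simulation of the behaviour described above, not Replicate's actual internals:

```python
# Assumed thresholds for illustration only; your task settings define the real ones.
MEMORY_LIMIT_MB = 1024   # max memory an open transaction may occupy
MAX_OPEN_SECONDS = 60    # max time a transaction may stay uncommitted in memory

def buffer_location(txn_size_mb, open_seconds):
    """Decide where an open (uncommitted) transaction is held.

    Mirrors the rule described above: changes stay in memory until the
    transaction grows too large or stays open too long, then spill to disk.
    """
    if txn_size_mb > MEMORY_LIMIT_MB or open_seconds > MAX_OPEN_SECONDS:
        return "disk"
    return "memory"

print(buffer_location(10, 5))     # small, short-lived transaction stays in memory
print(buffer_location(2048, 5))   # very large transaction spills to disk
print(buffer_location(10, 300))   # long-running transaction spills to disk
```

The two spill conditions are independent: either a very large transaction or a long-running one is enough to push changes to disk, which is why maintenance jobs with big uncommitted transactions are a common cause.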

For more details, please refer to the user guide.


2. When do records move “on disk,” and how does this affect CDC latency?
Records move to disk when a transaction exceeds the memory limit or remains uncommitted past the time threshold. Because disk I/O is slower than memory access, this can increase CDC latency. The key is understanding why Replicate is offloading changes to disk.


3. If a task is showing more records on disk, what are the typical causes?
- Changes are queued and waiting to be applied to the target.
- Very large or long‑running transactions, such as those generated by maintenance operations.


4. What tuning steps or settings should I look at to prevent disk spillover and reduce latency?
- Review table structures and primary keys/unique indexes, especially on tables containing LOB columns, which can significantly increase transaction size.
- Perform capacity and throughput testing to validate the performance of both the source and target endpoints.
- Check the network paths between the Qlik Replicate server, source, and target.


Regards,
Desmond

 

user2828
Partner - Contributor III
Author

Thanks @DesmondWOO for the explanation. I just need a few more things because I want to troubleshoot this properly:

1. How to know whether the source or the target has the issue.

2. If the issue is reading from the source (e.g., a large volume of logs is generated when activity occurs at the source), how to tune Replicate to ensure faster reads.
3. If the issue is writing to the target, what tuning needs to be done.

Mainly, I want to know how I can understand these things properly (e.g., with logging levels and by reading the logs), and how tuning the task can help increase throughput and reduce latency.

Thanks in advance.
