QlikUser2026
Contributor II

Performance issues with high-volume deletions in CDC

We are currently experiencing significant performance issues related to batch deletions coming from our source databases.

In one recent case, we had approximately 500k delete operations and 1.8 million change events, which resulted in a processing delay of around 24 hours.

We are using CDC replication and would like to understand whether there are recommended strategies or configuration options to improve deletion processing performance in such high-volume scenarios.

Are there known best practices, tuning parameters, or architectural approaches that could help reduce latency for large batches of deletes?
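For context, one generic idea we have been considering: if the deleted primary keys are largely contiguous, many single-row DELETEs can collapse into a few ranged statements. This is only an illustrative sketch of the shape of the problem, not a Replicate feature; the table and column names are made up:

```python
def collapse_deletes(keys):
    """Collapse integer primary keys into contiguous [lo, hi] ranges so a
    flood of single-row DELETEs can be expressed as a few ranged DELETEs."""
    ranges = []
    for k in sorted(keys):
        if ranges and k == ranges[-1][1] + 1:
            ranges[-1][1] = k          # extend the current range
        else:
            ranges.append([k, k])      # start a new range
    return ranges

def to_sql(table, pk, ranges):
    """Render the ranges as DELETE statements (identifiers are hypothetical)."""
    return [
        f"DELETE FROM {table} WHERE {pk} = {lo}" if lo == hi else
        f"DELETE FROM {table} WHERE {pk} BETWEEN {lo} AND {hi}"
        for lo, hi in ranges
    ]

stmts = to_sql("orders", "id", collapse_deletes([7, 1, 2, 3, 10, 8]))
print(stmts)  # three statements instead of six row-by-row deletes
```

Whether something equivalent happens inside the apply pipeline is exactly what we are unsure about.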

5 Replies
Dana_Baldwin
Support

Hi @QlikUser2026 

There could be multiple things causing this.

What are the source and target endpoint types? If the target is an RDBMS, are there PK indexes on the tables?

Is the delay at the source or target? You can increase logging to Trace for Performance to capture this in the task log.
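As a rough illustration of how to read those numbers once you have them: target latency includes source latency, so the gap between the two points at the apply side. The line format below is hypothetical; adapt the parsing to whatever your task log actually emits at Trace level:

```python
import re

# Hypothetical performance-trace line; real Replicate log lines differ,
# so adjust this regex to your actual task log output.
SAMPLE = "Performance: source latency 12.0 sec, target latency 540.0 sec"
PATTERN = re.compile(r"source latency ([\d.]+) sec.*target latency ([\d.]+) sec")

def bottleneck(line):
    """Return which side the latency points at, or None if the line
    does not match. A large (target - source) gap means the apply
    side is behind; otherwise capture is the slow side."""
    m = PATTERN.search(line)
    if not m:
        return None
    src, tgt = float(m.group(1)), float(m.group(2))
    return "target/apply" if (tgt - src) > src else "source/capture"

print(bottleneck(SAMPLE))  # -> target/apply
```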

Please refer to this knowledge article: Latency / Performance Troubleshooting and Tuning f... - Qlik Community - 1734097

If there are errors in the log files, open a support case for more help if needed. If it is a new task or a new endpoint, and there are no obvious errors or problems in the environment (server specs, network bottlenecks), we may refer you to your Customer Success Engineer (if you have one) or our Professional Services team. For more information on those options, see:

https://community.qlik.com/t5/Official-Support-Articles/How-and-when-to-contact-Qlik-s-Professional-...

 

https://community.qlik.com/t5/Official-Support-Articles/How-to-contact-Qlik-Support/ta-p/1837529

Thanks,

Dana

QlikUser2026
Contributor II
Author

Hi Dana,

Thanks a lot for your reply.

During the incident we observed multiple .tswp files being created with noticeably varying file sizes. The initial files were significantly larger (around 5,079 KB), while later files were much smaller (around 500 KB).

Do you have an idea what could cause this behaviour?

Our assumption is that it might be related to temporary buffering or transaction handling, but we could not find clear documentation about how Qlik internally segments or sizes .tswp files.

Specifically, we are trying to understand:

  • Are .tswp file sizes influenced by source transaction commits?

  • Could large source transactions cause larger .tswp files initially?

  • Or is the size mainly driven by target apply/commit behaviour?

Any insight into the internal logic or practical experience would be highly appreciated.

Thanks in advance!

SachinB
Support

Hello @QlikUser2026 ,

TSWP (Transaction Swap) files in Qlik Replicate are temporary files created in the \sorter directory when large, long-running, or uncommitted transactions exceed the task's memory limits; they can range from a few KB to several GB.

 

If a source transaction is uncommitted, Replicate must keep those changes in a .tswp file until the COMMIT is seen.

If you have a massive, long-running transaction on the source, the .tswp file will continue to grow until that transaction finishes or reaches an internal threshold.

If the target is slow to acknowledge commits, the .tswp files will accumulate and grow.
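The growth pattern you described (one large file first, then smaller ones) is consistent with a spill-on-threshold model: the first spill happens when a big backlog finally exceeds the memory limit, while later spills cover only the smaller residual activity. This is a toy sketch of that idea, not Replicate's actual internal algorithm; the sizes and thresholds are made up:

```python
class Sorter:
    """Toy model: changes for open transactions are buffered in memory;
    when the buffer exceeds memory_limit, it spills to a .tswp-like file."""
    def __init__(self, memory_limit):
        self.memory_limit = memory_limit
        self.buffered = 0
        self.spills = []            # sizes of "files" written to disk

    def add_change(self, size):
        self.buffered += size
        if self.buffered > self.memory_limit:
            self.spills.append(self.buffered)   # spill the whole buffer
            self.buffered = 0

    def commit(self):
        if self.buffered:
            self.spills.append(self.buffered)   # flush the remainder
        self.buffered = 0

s = Sorter(memory_limit=1000)
for _ in range(11):          # one big burst of uncommitted changes
    s.add_change(100)        # the 11th change pushes the buffer over the limit
for _ in range(4):           # smaller follow-up activity
    s.add_change(100)
s.commit()
print(s.spills)              # -> [1100, 400]: first file large, next one small
```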

The following Qlik Community threads discuss the same sorter-file behaviour:

TSWP-Files 

Generation-of-large-sorter-files 


Regards,

Sachin B

QlikUser2026
Contributor II
Author

Hi Sachin,
Thanks again for the explanations regarding .tswp files; this already helped a lot.
We investigated further and observed an additional behaviour that we would like to understand better, as it might help identify the real bottleneck.

During a recent incident:
* CDC task continued running (no failure)
* apply activity was still visible in the logs
* activity_log / activity_log_ct continued receiving entries
* however, no new entries were written to attrep_history for ~45 minutes


After restarting the task:
* first entry showed timeslot_duration = 0, latency = 0
* next entry immediately showed very high latency (backlog processing)
* task then caught up normally


At the same time:
* .tswp files were created (large files first, then smaller ones)
* source side did not show long-running transactions
* workload was mainly INSERT-heavy (not update-heavy)
* target is Azure SQL Database and no obvious resource spikes were visible.


From the behaviour it looks like:
* capture was still active
* apply activity existed
* but progress somehow stalled (no attrep_history updates).

Are there known cases where apply continues partially but monitoring/control tables stop updating until task restart? 
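In the meantime we are considering a simple watchdog along these lines, comparing the freshest timestamps of the activity table against attrep_history. This is a simplified sketch; the threshold and return strings are our own, and the real check would query the actual tables:

```python
from datetime import datetime, timedelta

def diagnose(activity_ts, history_ts, now, threshold=timedelta(minutes=15)):
    """Compare the latest row timestamps of an activity table and
    attrep_history. Flags the pattern we saw: apply still writing,
    control table silent."""
    activity_live = (now - activity_ts) <= threshold
    history_live = (now - history_ts) <= threshold
    if activity_live and not history_live:
        return "stalled: apply active but attrep_history not advancing"
    if not activity_live:
        return "idle: no recent apply activity"
    return "healthy"

now = datetime(2024, 1, 1, 12, 0)
print(diagnose(now - timedelta(minutes=1), now - timedelta(minutes=45), now))
```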

Thanks again for your support.

SachinB
Support

Hello @QlikUser2026,

When a Qlik Replicate task is stopped, it stores the last processed SCN/LSN (System/Log Change Number) for both the source capture and the target apply positions.

When the task is resumed, Replicate continues processing from the last saved checkpoint, ensuring that no data is reprocessed or skipped.

While the task is in a stopped state, Replicate does not scan or capture any new changes from the source database. Data capture resumes only after the task is restarted.
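Conceptually, the checkpoint-based resume can be sketched like this (a simplified model for illustration, not Replicate's implementation):

```python
class Task:
    """Minimal model of checkpoint-based resume: persist the last applied
    position, and on resume continue strictly after it, so overlapping
    changes are neither reprocessed nor skipped."""
    def __init__(self):
        self.checkpoint = 0      # last applied SCN/LSN
        self.applied = []

    def process(self, stream):
        for scn, change in stream:
            if scn <= self.checkpoint:
                continue             # already applied before the stop
            self.applied.append(change)
            self.checkpoint = scn    # advance the saved position

task = Task()
task.process([(1, "a"), (2, "b")])            # run, then "stop"
task.process([(1, "a"), (2, "b"), (3, "c")])  # resume; the overlap is skipped
print(task.applied)                           # -> ['a', 'b', 'c']
```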

It would be helpful if you could raise a support case and attach the relevant logs when the issue occurs.

With the required logs available, we will be able to analyze the behavior in detail and provide more accurate findings and recommendations.

Regards,

Sachin B