iti-attunity-sup
Partner - Creator III

Regarding Transaction Offload Tuning.

Hello Experts

I have a question regarding Transaction Offload Tuning.

My customer has encountered performance problems while updating a large amount of data, as shown below:
- 1st transaction updates 309,229 rows
- 2nd transaction updates 144,945 rows

CDC had not finished after more than an hour, so the customer eventually gave up and stopped the task.

I noticed that 'Information about Incoming Changes' shows a high number of rows as 'On Disk', and I wonder whether this is one of the reasons for the poor performance.

[Screenshot: 'Information about Incoming Changes', showing a large number of rows On Disk]

Questions:
1. In this situation, would setting higher values for 'Transaction Offload Tuning' (Total transactions memory size exceeds (MB) / Transactions duration exceeds (seconds)) be effective?
I assume this requires that enough memory is available.

2. Are there any other Qlik Replicate settings to improve this kind of performance?

3. Are there any good approaches for when we update a large number of rows in one transaction and would like to apply the changes to the target as soon as possible?

For example, is it effective to reload the table in question manually?

 

Any advice would be appreciated.

Regards,
Kyoko Tajima

3 Replies
john_wang
Support

Hello Tajima-san, @iti-attunity-sup 

Please have a look at Troubleshooting Qlik Replicate Latency and Performance Issues.

Hope this helps.

John.

DesmondWOO
Support

Hi Tajima-san, @iti-attunity-sup,

Replicate offloads transactions from memory to disk when the total transaction memory exceeds a configured threshold or the transaction duration exceeds a configured time limit. 

You can configure these thresholds in the task's Change Processing Tuning settings.

[Screenshot: Change Processing Tuning settings]
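To make the two thresholds concrete, here is a minimal sketch of the decision being described. This is plain Python for illustration, not Replicate code, and the example values are assumptions based on the UI labels 'Total transactions memory size exceeds (MB)' and 'Transactions duration exceeds (seconds)'.

```python
# Illustrative sketch only (not Replicate source code): when either
# threshold from "Transaction offload tuning" is exceeded, open
# transactions start being spilled from memory to disk.

MEMORY_LIMIT_MB = 1024     # "Total transactions memory size exceeds (MB)" - example value
DURATION_LIMIT_SEC = 60    # "Transactions duration exceeds (seconds)" - example value

def should_offload(total_tx_memory_mb: float, tx_duration_sec: float) -> bool:
    """True when changes would start showing up as 'On Disk' instead of staying in memory."""
    return (total_tx_memory_mb > MEMORY_LIMIT_MB
            or tx_duration_sec > DURATION_LIMIT_SEC)

# A long-running transaction is offloaded even if memory use is still low:
print(should_offload(total_tx_memory_mb=200, tx_duration_sec=300))   # True
# A short transaction under both limits stays in memory:
print(should_offload(total_tx_memory_mb=200, tx_duration_sec=10))    # False
```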

Regards,
Desmond

 

Heinvandenheuvel
Specialist III

>> I have a question regarding Transaction Offload Tuning.

No you don't. Well, you do, but it is not a relevant question for the problem you are trying to solve.

Did it ever work properly? What changed?

>> I noticed that 'Information about Incoming Changes' shows a high number of rows as 'On Disk', and I wonder whether this is one of the reasons for the poor performance.

No, it is not the cause; it is the effect. It shows something is wrong with applying the changes.

>> 1. In this situation, would setting higher values for 'Transaction Offload Tuning' ... be effective?

NO. When applying takes more than a few minutes, it is no longer relevant whether the changes are stored on disk or in memory. Agreed? Tuning how soon (time, space) it is offloaded will not improve on that hour.

>> 2. Are there any other Qlik Replicate settings to improve this kind of performance?

Potentially, notably "Batch Apply" and possibly Data Error handling. 

For others to help you with that, you need to provide more basic information. What is the target DB? Is the task in Transactional apply mode (I have the feeling it is) - if so: WHY? And what is happening during that hour?
A) Anything in the task log?
B) Using target DB tools, what is happening there? Singleton stores?
C) Using the other monitoring screens (Apply Changes, Apply Throughput): is anything happening at all?
D) Increase TARGET_APPLY to TRACE in LOGGING for 5 minutes - what is showing up? (A sketch for summarizing those log lines follows below.)
E) If nothing showed up, increase TARGET_APPLY LOGGING to VERBOSE for a minute... anything?
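If you do raise TARGET_APPLY to TRACE or VERBOSE, a quick way to see whether the apply is progressing or stalled is to count how many TARGET_APPLY lines land in the task log per minute. Below is a minimal, hypothetical sketch of that check; the log file name and the assumption that the second whitespace-separated field is an ISO timestamp are mine, not something Replicate guarantees, so adjust both to your environment.

```python
# Sketch for points D/E above: after raising TARGET_APPLY to TRACE/VERBOSE,
# count TARGET_APPLY log lines per minute to see whether the apply is
# progressing or stalled. File name and timestamp position are assumptions.
from collections import Counter

LOG_FILE = "reptask_MyTask.log"   # hypothetical task log file name - adjust

per_minute = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if "TARGET_APPLY" not in line:
            continue
        parts = line.split()
        # assume the second field is an ISO timestamp like 2024-08-19T14:05:37
        minute = parts[1][11:16] if len(parts) > 1 and len(parts[1]) >= 16 else "unknown"
        per_minute[minute] += 1

for minute, count in sorted(per_minute.items()):
    print(f"{minute}  {count} TARGET_APPLY lines")
```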

The most common issue for slow applies is error handling. Which apply error handling has been chosen? In bulk mode, the default is that if even just one row fails, the whole bulk is re-played one row at a time to identify the failing row. If that is happening, and the row-by-row retry is desirable, then limiting the bulk size is actually better: for example, nine bulks of 1K rows succeed and only one bulk of 1K rows is done slowly, versus one bulk of 10K rows failing and all of them being retried (see the sketch below).
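To make that arithmetic concrete, here is a back-of-the-envelope sketch; the per-bulk and per-row timings are invented purely for illustration and are not Replicate measurements.

```python
# Back-of-the-envelope comparison of the scenario described above.
# All timings are invented for illustration only.

BULK_SECONDS_PER_1K_ROWS = 0.5    # assumed cost of a successful 1K-row bulk apply
SINGLETON_SECONDS_PER_ROW = 0.05  # assumed cost of re-applying one row at a time

def one_big_bulk(total_rows=10_000):
    """One 10K bulk fails on a single bad row -> the whole bulk is replayed row by row."""
    failed_bulk = BULK_SECONDS_PER_1K_ROWS * total_rows / 1000
    replay = SINGLETON_SECONDS_PER_ROW * total_rows
    return failed_bulk + replay

def ten_small_bulks(total_rows=10_000, bulk_rows=1_000):
    """Ten 1K bulks: nine succeed, only the bulk containing the bad row is replayed."""
    bulks = total_rows // bulk_rows
    successful = BULK_SECONDS_PER_1K_ROWS * (bulks - 1)
    failed_bulk = BULK_SECONDS_PER_1K_ROWS
    replay = SINGLETON_SECONDS_PER_ROW * bulk_rows
    return successful + failed_bulk + replay

print(f"one 10K bulk, then row-by-row replay: {one_big_bulk():.0f} s")
print(f"ten 1K bulks, only one replayed:      {ten_small_bulks():.0f} s")
```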

>> 3. Are there any good approaches for when we update a large number of rows in one transaction and would like to apply the changes to the target as soon as possible?

Look carefully at the "Apply Conflicts" handling policy - both for the task and for the system-wide defaults.

Look to see what the heck Replicate and the Target DB are actually doing during the slow apply!

 

>> For example, is it effective to reload the table in question manually?

Likely. Even if the table is 10x larger than the changes to be applied, that is likely faster. But that is not a solution, merely a temporary workaround.

Hein.