Qlik Replicate batch optimized apply mode behaviors


Last Update:

Dec 1, 2025 9:52:01 AM

Updated By:

Sonja_Bauernfeind

Created date:

Dec 1, 2025 9:52:01 AM

With an RDBMS target endpoint such as Oracle, where Change Processing Mode is set to Batch optimized apply and both Apply Changes and Store Changes are enabled, paired INSERT and DELETE operations may be merged during batching.

This results in no change being applied to the target.

In Batch Optimized Apply mode, Qlik Replicate commits changes in batches. Before applying these batches, Qlik Replicate performs preprocessing to group transactions as efficiently as possible. Events sharing the same primary key (or RRN) may be merged as follows:

DELETE + INSERT  →  UPDATE
INSERT + DELETE  →  IGNORE

In more detail:

In Batch optimized apply mode, Qlik Replicate merges INSERT/DELETE pairs so that the net result is a no-op at the target. However:

  • These operations are still fully recorded in the Store Changes tables
  • They also appear in the GUI → Applied Changes view
  • This sometimes leads users to believe that operations (e.g., a DELETE) were “lost,” but this behavior is expected and designed into Batch optimized apply
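The merge rules above can be sketched in Python. This is an illustrative model only, not Replicate's implementation; the event shapes and function name are assumptions:

```python
# Toy model of batch preprocessing: collapse events that share a primary
# key into one net operation, per the DELETE+INSERT -> UPDATE and
# INSERT+DELETE -> IGNORE rules. Not Qlik Replicate source code.

def net_batch(events):
    """Return at most one net operation per primary key."""
    net = {}  # pk -> (op, row)
    for op, pk, row in events:
        prev = net.get(pk)
        if prev is None:
            net[pk] = (op, row)
        elif prev[0] == "DELETE" and op == "INSERT":
            net[pk] = ("UPDATE", row)       # DELETE + INSERT -> UPDATE
        elif prev[0] == "INSERT" and op == "DELETE":
            del net[pk]                     # INSERT + DELETE -> IGNORE
        else:
            net[pk] = (op, row)             # otherwise the later event wins
    return [(op, pk, row) for pk, (op, row) in net.items()]

batch = [
    ("INSERT", 1, {"name": "a"}),
    ("DELETE", 1, None),            # pair on pk 1 cancels: nothing applied
    ("DELETE", 2, None),
    ("INSERT", 2, {"name": "b"}),   # pair on pk 2 becomes an UPDATE
]
print(net_batch(batch))  # [('UPDATE', 2, {'name': 'b'})]
```

Note that the cancelled pair never reaches the target, even though both events were captured and stored.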

To avoid merge-related side effects during sensitive processing windows, temporarily switch the task to Transactional apply.
Transactional apply enforces the exact sequence of operations.

Once the critical period has passed and the task has stabilized, you can safely switch back to Batch optimized apply to maintain performance.

Scenarios and Workarounds

Below are two scenarios outlining how this behavior appears in practice, as well as workarounds.

Scenario One: Stop Task and Restart from Timestamp

  1. Create a task with full load, apply changes, and store changes enabled.
  2. Start the task and let Replicate create the target table.
  3. Insert one row into the source at 10:00 AM → both source and target now contain one row.
  4. Stop the task.
  5. Delete the row from the source.
  6. Restart the task from 10:00 AM (or slightly earlier); the current time is now 10:10 AM.

Observed Behavior

One row remains in the target even though it was deleted in the source. All changes, however, are correctly recorded in the store changes tables.
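The outcome can be modeled by contrasting the two apply modes on the replayed events. This is a toy sketch; the event tuples and helper names are illustrative assumptions, not Replicate internals:

```python
# Restarting from 10:00 AM re-captures both events for the same key:
replayed = [("INSERT", 1, {"name": "a"}),  # the 10:00 AM insert, again
            ("DELETE", 1, None)]           # the delete made while stopped

def apply_transactional(target, events):
    """Apply every event in order: the re-applied INSERT is then removed
    by the DELETE, so the target ends up matching the source."""
    for op, pk, row in events:
        if op == "DELETE":
            target.pop(pk, None)
        else:
            target[pk] = row
    return target

def apply_batch(target, events):
    """Net out INSERT+DELETE pairs per key first (the IGNORE rule), so
    no statement reaches the target and the stale row survives."""
    net = {}
    for op, pk, row in events:
        if op == "DELETE" and net.get(pk, ("",))[0] == "INSERT":
            del net[pk]                     # pair cancels out
        else:
            net[pk] = (op, row)
    netted = [(op, pk, row) for pk, (op, row) in net.items()]
    return apply_transactional(target, netted)

print(apply_batch({1: {"name": "a"}}, list(replayed)))
# -> {1: {'name': 'a'}}  (row remains despite the source delete)
print(apply_transactional({1: {"name": "a"}}, list(replayed)))
# -> {}  (Transactional apply honors the exact sequence)
```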

Workaround

Before step 6, switch the task to Transactional apply. Once the changes made through step 5 (the insert and the delete) have been applied, switch back to Batch optimized apply.

 

Scenario Two: Stop Task and Resume

  1. Stop a CDC task. Assume both source and target are empty.
  2. Insert one row into the source.
  3. Run a Full Load only task → both source and target now have one row.
  4. Delete the row in the source.
  5. Resume the CDC task.

Observed Behavior

One row remains in the target. All changes are still correctly logged in the store changes tables.

Workaround

Before step 5, switch to Transactional apply. Once the changes made in steps 2 and 4 (the insert and the delete) have been applied, switch back to Batch optimized apply.

 

Notes

  • During the Transactional apply period, some INSERT events may be captured more than once. Duplicate rows will be rejected by the target's primary key constraint
  • If the target table has no primary key, duplicate rows may need to be removed manually
  • Setting the target endpoint to UPSERT mode may help reduce duplicates
  • Some endpoints (e.g., Snowflake) do not support Transactional apply. In such cases, a reload may be required
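For the no-primary-key case, the manual clean-up amounts to whole-row deduplication. A minimal illustrative sketch (in practice you would typically do this with SQL against the target):

```python
# Keep the first occurrence of each fully identical row and drop repeats,
# for a target table that has no primary key to reject duplicates.

def dedupe(rows):
    """Return rows with exact whole-row duplicates removed."""
    seen, unique = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))  # identity over all columns
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [{"id": 1, "name": "a"},
        {"id": 1, "name": "a"},   # duplicate captured during the switch
        {"id": 2, "name": "b"}]
print(dedupe(rows))  # [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
```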

 

Environment

  • Qlik Replicate