Rueda
Contributor II

Row by row delete in Azure Database with z/OS source

We are currently using Qlik Replicate November 2024, with a z/OS source and Azure SQL Database as the target, and we are experiencing significant performance issues and delays during large-scale delete operations.

Specifically, when bulk deletes are triggered on the source, the replication process appears to handle them as row-by-row deletes on the target side, which is causing major latency and performance degradation.

Our task is configured in Transactional (CDC) mode, and switching to Batch mode is not an option due to business requirements.

We would like to understand:

  • Is there any configuration, optimization, or best practice to avoid row-by-row delete processing in this scenario?
  • Are there alternative approaches (e.g., bulk delete handling, tuning parameters, or target-side optimizations) to improve performance?
  • Has anyone faced a similar issue with z/OS → Azure SQL Database replication and found an effective workaround?

Any insights or recommendations would be greatly appreciated.

1 Solution

Accepted Solutions
john_wang
Support

Hi @Rueda ,

This appears to be more of a tuning-related issue. Since I’m not fully familiar with your environment and current task configuration, some possible tuning approaches could include:

  1. Place the critical tables in a Transactional Apply task

  2. Move less critical or high-volume tables into a separate Batch Apply task

  3. Test different delete handling modes and apply settings
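
To illustrate why moving high-volume tables to a Batch Apply task helps, here is a small sketch of the underlying SQL pattern. This is not Replicate itself: it uses an in-memory sqlite3 database (Azure SQL behavior differs in detail) to compare per-row deletes, as a transactional apply issues them, against one set-based delete covering the same keys:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "x" * 50) for i in range(50_000)])
conn.commit()

# Row-by-row: one DELETE statement per key, preserving per-change ordering.
start = time.perf_counter()
for i in range(25_000):
    cur.execute("DELETE FROM t WHERE id = ?", (i,))
conn.commit()
row_by_row = time.perf_counter() - start

# Set-based: one statement covering the same number of rows, as a batch
# apply can issue when many deletes are grouped together.
start = time.perf_counter()
cur.execute("DELETE FROM t WHERE id >= ? AND id < ?", (25_000, 50_000))
conn.commit()
set_based = time.perf_counter() - start

print(f"row-by-row: {row_by_row:.3f}s, set-based: {set_based:.3f}s")
```

The gap grows with the number of deleted rows, which is why splitting only the ordering-critical tables into the transactional task, as suggested above, limits the cost.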

However, since any recommendation may be inaccurate without a complete understanding of the environment and configuration, we would recommend engaging Professional Services (PS) for a deeper review and tuning assessment.

thanks,

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!


3 Replies
john_wang
Support

Hello @Rueda ,

This behavior looks reasonable to me:

  1. Transactional (CDC) mode can be slow, especially when the target table does not have a Primary Key or Unique Index.

  2. Deletes are processed row-by-row. This is by design and not considered a defect.
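
The primary-key point in item 1 matters because each replicated DELETE must first locate its target row; without a PK or unique index, every single delete forces a full table scan. A small sqlite3 sketch shows the difference in the query plan (illustrative only; on Azure SQL you would inspect the plan or the `sys.indexes` catalog view instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE no_pk (id INTEGER, payload TEXT)")    # no PK or index
cur.execute("CREATE TABLE with_pk (id INTEGER PRIMARY KEY, payload TEXT)")

def delete_plan(table):
    # EXPLAIN QUERY PLAN shows whether the delete can seek via an index
    # or must scan the whole table for every replicated change.
    rows = cur.execute(
        f"EXPLAIN QUERY PLAN DELETE FROM {table} WHERE id = 1").fetchall()
    return rows[0][-1]  # plan detail string

print(delete_plan("no_pk"))    # full table scan per delete
print(delete_plan("with_pk"))  # indexed search per delete
```

Multiplied by thousands of deletes in a bulk operation, the scan-per-row case is where the latency described above typically comes from.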

I’d like to better understand why Batch mode is not an option in this scenario.

thanks,

John.

Rueda
Contributor II
Author

The client wants to ensure that transactions are applied to the target in the same order as they are created in the source system.
