PriQlikDBA
Contributor II

Qlik Replicate Throughput is slow

Qlik Replicate throughput has been slow for the past couple of days.

Source: DB2 for zOS

Target: DB2 for LUW

Task: Daily Full Load Refresh.

For the past few days, the daily full load refresh has been taking extraordinarily longer than normal. First, throughput has dropped drastically; second, it is not consistent: it drops to 0, comes back up to around 3,128 records/sec, then drops to 0 again. Previously, when 4 tables were refreshed in one task, the combined throughput was around 80k records/sec (distributed across the 4 tables); now it is about 8,207 records/sec for one table while the remaining tables sit at 0. We rebounced the target LUW server to flush all CPU resources and memory buffers, but even after the restart we are still seeing poor throughput. No changes were made on the source or target end, and memory/CPU are within limits on both the Qlik Replicate server and the target (DB2 for LUW).
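For anyone who wants to reproduce the measurement, here is a minimal Python sketch of how the rate can be sampled: read the cumulative row count on the target every few seconds and compute records/sec. The get_row_count callable is a placeholder for however you count rows with your DB2 driver; it is not part of Qlik Replicate.

import time

def sample_throughput(get_row_count, interval_s=10, samples=6):
    """Print records/sec computed from periodic cumulative row counts.

    get_row_count is a placeholder callable returning the current row
    count of the target table (e.g. a SELECT COUNT(*) via your driver).
    """
    prev = get_row_count()
    for _ in range(samples):
        time.sleep(interval_s)
        cur = get_row_count()
        rate = (cur - prev) / interval_s
        print(f"{time.strftime('%H:%M:%S')}  {rate:,.0f} records/sec")
        prev = cur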

Please let us know how to address this issue.

Thank you,

Raghavan Sampath

 

1 Solution

Accepted Solutions
PriQlikDBA
Contributor II
Author

Thank you all for your wonderful support and guidance. The issue is resolved.

Source: DB2 for zOS

Target: DB2 for LUW

Qlik Replicate Server (Windows).

1. As suggested, enabled VERBOSE logging for SOURCE_CAPTURE and TARGET_LOAD.

2. Checked for CPU resource constraints (if any) on the target end. Nothing alarming noted.

3. Checked CPU and memory utilization on the Qlik Replicate server (4-core CPU, 64 GB RAM). Nothing alarming noted.

4. Tweaked the refresh task (e.g., reduced the commit rate to 10,000 records and ran 1 table at a time in the full load tuning parameters; see the sketch after this list). Nothing worked.

5. Finally, moved on to source-end analysis (DB2 for zOS). Qlik's calls are established using a UDF (User Defined Function) and a dedicated WLM address space. To begin with, we noted too many long-reader messages in the DB2 for zOS message log. We then stopped the Qlik UDF to flush all the caches and recycle the process, and restarted it to begin with new connections. That helped.

6. Not sure whether that is the right analysis, but it worked; in fact, the throughput was excellent afterwards. We transfer 10 to 12 billion records over a weekend and 3 to 4 billion records on a daily basis.
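For illustration, here is roughly what the commit-rate tuning in step 4 amounts to on the client side. This is a minimal Python sketch over a generic DB-API 2.0 connection (ibm_db_dbi would fit for DB2 LUW); the table, columns, and driver are placeholders, and this is not how Replicate implements its full load internally.

def load_in_batches(conn, rows, batch_size=10_000):
    """Insert rows, committing every batch_size records.

    conn is any DB-API 2.0 connection; target_table and its two
    columns are placeholders for illustration only.
    """
    cur = conn.cursor()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            cur.executemany("INSERT INTO target_table VALUES (?, ?)", batch)
            conn.commit()  # frequent, smaller commits keep log pressure bounded
            batch.clear()
    if batch:  # flush the final partial batch
        cur.executemany("INSERT INTO target_table VALUES (?, ?)", batch)
        conn.commit()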

Thank you once again for all your support and guidance.

Raghavan Sampath

 

 


3 Replies
DesmondWOO
Support

Hi @PriQlikDBA,

Thank you for reaching out to the Qlik Community.

Please enable VERBOSE logging on SOURCE_UNLOAD and TARGET_LOAD. This will allow you to review the details involved in transferring data from the source table to the target table, helping you identify whether the issue lies with the source or the target.
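Once VERBOSE is enabled, one quick way to compare the two sides is to pull only the SOURCE_UNLOAD and TARGET_LOAD lines out of the task log and look at their timestamps. A rough Python sketch follows; the log path shown is the default Windows install location and may differ in your environment, and the assumption that each verbose line carries its component name in the header is just that, an assumption.

import re
import sys

# Placeholder path -- adjust to where your task logs live.
LOG_PATH = r"C:\Program Files\Attunity\Replicate\data\logs\mytask.log"

# Assumption: each verbose log line names its component in the header.
PATTERN = re.compile(r"SOURCE_UNLOAD|TARGET_LOAD")

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if PATTERN.search(line):
            sys.stdout.write(line)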

Regards,
Desmond

Dana_Baldwin
Support

Hi @PriQlikDBA,

In addition to Desmond's suggestion, please check with your network team to ensure throughput is good from source to Replicate and from Replicate to target.
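If the network team needs a quick raw-throughput number between two of the hosts, a self-contained check like the sketch below can help. This is a Python sketch, not an official tool: run the serve side on one host and the send side on the other; hosts and port are placeholders, and a dedicated utility such as iperf is the more rigorous option.

import socket
import sys
import time

CHUNK = 1024 * 1024  # 1 MiB per send/recv
TOTAL_MB = 256       # data volume pushed per test run

def serve(port):
    """Run on the receiving host (e.g. the target side)."""
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"{received / 1e6 / secs:.1f} MB/s from {addr[0]}")

def send(host, port):
    """Run on the sending host (e.g. the Replicate server)."""
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, port)) as conn:
        for _ in range(TOTAL_MB):
            conn.sendall(payload)

if __name__ == "__main__":
    # usage: python netcheck.py serve 5001   or   python netcheck.py <host> 5001
    if sys.argv[1] == "serve":
        serve(int(sys.argv[2]))
    else:
        send(sys.argv[1], int(sys.argv[2]))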

Please also confirm that there is no index fragmentation on the source that might slow down reading of the data.

If you need more detailed assistance, please open a support case. This allows us to get other teams involved as needed (R&D, Professional Services).

Thanks,

Dana
