JorgeManrique
Contributor II

Throughput drops during large change bursts

Hello, I am currently working with Qlik Replicate version 2023.11.0.597.
We have a DB2 LUW source and a Confluent endpoint as the broker. This month we have experienced several issues during replication to Confluent: when a massive volume of changes (millions of records) is generated in DB2 LUW within a very short period, replication performance starts to drop.
To replicate 35 million changes generated within approximately half an hour, it takes nearly 12 hours to apply them. This volume of data does not occur every day, but there are specific days when it does, and it is causing extremely low replication performance, as well as generating many hours of latency on the target.
Is there any option within Qlik to mitigate this loss of performance during replication?
5 Replies
DesmondWOO
Support

Hi @JorgeManrique ,

When 35 million changes are generated in a single transaction, the Qlik Replicate process must first wait for the commit before it can begin applying data. Once the data is received, Qlik Replicate needs to break it down to fit the target table structure. Because such a large transaction arrives all at once, Qlik Replicate will offload changes to disk, which inevitably impacts performance.

In addition, update and delete operations require the target database to locate the relevant records before applying changes, which adds further overhead. This is why applying all the data takes considerable time.
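As a generic illustration (not Qlik Replicate internals), the cost of locating rows for each UPDATE can be seen with any database. The sketch below uses an in-memory SQLite table with made-up names and sizes: the same 1,000 UPDATEs are applied first without an index (full table scan per statement) and then with a unique index on the key:

```python
# Illustrative only: why per-row UPDATEs are slow when the target
# must scan to locate each record. Table name and sizes are invented.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "x") for i in range(50_000)])
conn.commit()

def apply_updates(n):
    """Apply n single-row UPDATEs and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n):
        cur.execute("UPDATE t SET val = 'y' WHERE id = ?", (i,))
    conn.commit()
    return time.perf_counter() - start

no_index = apply_updates(1_000)           # full scan to find each row
cur.execute("CREATE UNIQUE INDEX idx_t_id ON t(id)")
with_index = apply_updates(1_000)         # index lookup to find each row
print(f"no index: {no_index:.2f}s, with index: {with_index:.2f}s")
```

The same effect applies on a real target: if replicated tables lack a primary key or unique index, every change forces a scan, which is one reason applying millions of UPDATEs and DELETEs takes so long.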

Overall, the latency is likely to be concentrated on the target side. To confirm, you can enable TRACE logging on the PERFORMANCE logger and verify where the delays occur.

Regards,
Desmond

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
JorgeManrique
Contributor II
Author

Hello, in this case the latency does come from the target; it keeps accumulating until the data finishes replicating. On a separate note, is there any way to compress this data at the source to make replication faster? And can this volume of incoming changes, which understandably has to be written to disk because it cannot be applied instantly, really cause throughput to drop from thousands of operations per second to barely 300 ops/s?
DesmondWOO
Support

Hi @JorgeManrique ,

I’m not entirely sure what you mean by “compress this data at the source to try to make replication faster.” If your target is a cloud database endpoint such as Snowflake, Qlik Replicate does compress the CSV files before uploading them to the target. For other endpoints, such as relational databases, Qlik Replicate applies DML statements directly, without compression.

Qlik Replicate applies changes to the target using DML statements. For example, if a maintenance job performs a very large UPDATE, performance will be impacted because the database must locate and update each record. In terms of replication capacity, I recommend discussing with your DBA to review transaction design and database tuning. You can also enable verbose logging on TARGET_APPLY to see in detail how Qlik Replicate is applying changes to the target.
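To illustrate why transaction design matters (again a generic sketch, not Qlik Replicate internals), compare applying the same changes with one commit per row versus one commit for the whole batch. The file path and row counts below are invented for the demo:

```python
# Illustrative only: per-change commits force a disk sync for every
# operation, while a batched transaction pays that cost once.
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE changes (id INTEGER PRIMARY KEY, val TEXT)")

def per_row(n, offset=0):
    """One transaction (and disk sync) per change."""
    t0 = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO changes VALUES (?, ?)", (offset + i, "x"))
        conn.commit()
    return time.perf_counter() - t0

def batched(n, offset=0):
    """One transaction for the whole batch of changes."""
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO changes VALUES (?, ?)",
                     [(offset + i, "x") for i in range(n)])
    conn.commit()
    return time.perf_counter() - t0

slow = per_row(500)
fast = batched(500, offset=500)
print(f"per-row commits: {slow:.2f}s, single batch: {fast:.2f}s")
```

This is the same trade-off your DBA can review on the source side: many tiny transactions or one enormous one both stress replication, and batching changes sensibly usually sits in between.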

Regards,
Desmond

JorgeManrique
Contributor II
Author

What I mean is whether Qlik Replicate has the capability to compress the data it reads from the DB2 log before sending it to the broker, since this way the data size is smaller and the replication is faster.
john_wang
Support

Hello @JorgeManrique ,

Qlik Replicate utilizes the IBM DB2 ODBC Driver to transfer data between the DB2 server and the DB2 client; in this case, Replicate is the DB2 client. If the ODBC driver provided compression functionality, Replicate could take advantage of it. Unfortunately, the IBM DB2 ODBC Driver does not appear to provide such a capability yet.

thanks,
John.
