Hello everybody
We use Qlik Replicate to load various tables from Oracle to Postgres.
For further processing in Postgres, we added a column that stores the timestamp of the transfer/commit, for example qlik_change_ts.
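To illustrate, a target table on the Postgres side looks roughly like this (table and column names other than qlik_change_ts are made up for the example):

```sql
-- Sketch of a replicated target table in Postgres (names are illustrative).
-- qlik_change_ts is populated by the Replicate transformation
-- $AR_H_COMMIT_TIMESTAMP for every transferred row.
CREATE TABLE orders_replica (
    order_id       bigint PRIMARY KEY,
    order_payload  text,
    qlik_change_ts timestamp
);
```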
However, using $AR_H_COMMIT_TIMESTAMP leads to massive performance losses during data transfer.
Is there a simple trick to avoid slowing down the transfer? Alternatively, we could remove $AR_H_COMMIT_TIMESTAMP from the task, but that would require extensive adjustments to the downstream processes.
Regards
Alex
Hello @al3x ,
This appears to be the first reported case where the transformation $AR_H_COMMIT_TIMESTAMP is causing significant performance degradation. Typically, performance issues are related to either the target side being unable to handle the incoming data or the source-side capture process.
In this situation, I recommend opening a support ticket with the appropriate logging level enabled. Additionally, please clarify whether the issue occurs during the Full Load phase, the Change Processing phase, or both.
Hope this helps.
John.
Hey John
Thanks for your response. Which logging level do you need exactly?
For us, only the Full Load matters; during Change Processing it is less of an issue.
Regards
Alex
Hello @al3x ,
Thanks for the update.
We need to set PERFORMANCE, SOURCE_UNLOAD, TARGET_LOAD, and TRANSFORMATION to Trace in order to analyze the task log files. However, if the behavior occurs only during the Full Load phase, the task can often be optimized by enabling features such as Parallel Load.
Hope this helps.
John.