Hi Team,
Currently, our Qlik Replicate child task is enabled with Store Changes only for CDC processing.
We are keeping the Global policy for Apply Conflicts and using UPSERT mode (for duplicates and inserts not found). Can we change this to the Task policy for Store Changes?
In case the Global handling policy is needed, do we still need UPSERT mode?
Will it cause delay (latency) while writing entries to the attrep_changes table in Databricks?
Currently we are seeing latency while writing CDC data through INSERT/SQL MERGE queries from Qlik -> S3 -> Databricks.
From an error handling perspective between Qlik -> Databricks Delta, please let us know the recommendation for Apply Conflicts in this case.
Source: MS SQL Server (MS-CDC)
Target: Databricks Delta
Thanks,
Sudarsan K
If I understand correctly, you are using Store Changes only and not Apply Changes. If so, the Apply Conflicts error handling settings are not relevant to your use case, as all activity on the target will be inserts.
Please elaborate about your use case if I am mistaken.
Thanks,
Dana
Yes, you are correct, Dana. In this case, can I change Error Handling from the current 'Global policy' to the 'Task policy' option?
Currently I have been observing latency issues during SQL query execution and while writing CDC data from Qlik Replicate to Databricks. We have increased the Databricks SQL warehouse size to improve performance, but it did not help.
So from the Qlik task perspective, apart from Change Processing Tuning, I am exploring all possible options to reduce the latency occurring in the child Qlik task while writing from Qlik -> S3 -> Databricks.
I want to know whether changing this error handling from Global to Task policy will bring any benefit while writing entries to Databricks.
If I keep the Global policy setting in my Qlik task, which has Store Changes only enabled, will it cause any delay or latency impact while writing entries to the attrep apply changes table in Databricks?
I would like to understand the use of this error handling option from a latency perspective. Please clarify.
Since you are only using store changes and not apply changes, the task is not using batch apply mode and therefore is not using the attrep apply changes table.
Are there any errors or warnings in the task logs when you see latency?
Is the latency on the source or target side? See:
Latency / Performance Troubleshooting and Tuning f... - Qlik Community - 1734097
Beyond this:
Based on the description and details you have provided on this support case, we recommend getting our Professional Services team involved as the assistance you need is related to implementation/configuration/performance tuning which falls under the purview of Professional Services. Engaging Professional Services will ensure a thorough and pro-active setup of your environment with various architectural artifacts taken into account.
As part of the PS engagement (fee based), you will get a dedicated PS resource to work with you over Zoom/Teams or in person, who will be able to answer your questions and work with you one-on-one on the items you need assistance with. Please reach out to your Account Manager if you do not have Professional Services hours or are not already working with someone from that team.
You can also refer to these Community articles:
https://community.qlik.com/t5/Official-Support-Articles/How-to-contact-Qlik-Support/ta-p/1837529
Thanks,
Dana
HI Dana,
Thanks for the clarification.
One query here: my Qlik Replicate task is running with Full Load and Store Changes only enabled (Apply Changes disabled) and is replicating changes to Databricks Delta.
Can I enable 'Apply changes using SQL Merge' under Change Processing Tuning and use it while the task is in Store Changes mode?
If so, how will it impact the data load into the attrep_changes table and the subsequent write queries to the Databricks Delta tables?
Please let me know.
Thanks,
Sudarsan K
The Change Processing Tuning settings are only relevant if you have Apply Changes enabled.
As you only have Full Load and Store Changes enabled, the only activity on the target will be inserts. Since no changes are applied to existing rows, the setting "Apply changes using SQL Merge" has no effect.
Hope this helps!
Dana
Great, OK.
The main problem here is that I have been getting Target Apply latency while Qlik replicates changes via S3 staging to Databricks Delta (target).
CDC flow: MS SQL Server (MS-CDC) -> parent and child Qlik tasks (with Store Changes only enabled) -> staging S3 -> S3 destination (reflected in the Databricks Delta CT tables).
Once changes (inserts/updates/deletes) are sent from the Qlik child task:
1. Qlik starts a new bulk apply, then compresses the data before uploading it to S3 staging.
2. Once the file is copied/uploaded to S3 staging, it is copied into the attrep_changes table, and from there the queries are written/executed to the CT tables in Databricks Delta.
I can see one attrep_changes table generated in Databricks Delta for each Qlik task.
Here the target latency seems to accumulate initially in the Target Apply staging phase and continues to increase until query execution to Databricks is completed:
• during gzip compression of the bulk CSV file, and
• during the copy/upload to the S3 staging area before the Databricks load.
Please find attached the trace logs generated (on 07/11 after 5:38 AM) for the Qlik child task.
Please let me know whether this compression and upload to S3 are contributing to the latency increase here.
If so, can I disable this compression on the Qlik target endpoint (Databricks Delta) to improve latency?
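As a side note, before disabling compression it may help to reason through the trade-off: gzip spends CPU time to shrink the upload, so it only hurts when the network link is fast relative to the compression throughput. A back-of-envelope sketch in Python (all numbers below are illustrative assumptions, not measurements from this environment):

```python
# Rough model of the staging step: optional single-threaded compression,
# then upload over a link of a given speed. All inputs are assumptions.
def staging_time(file_mb, bandwidth_mbps, compress_mbps=None, ratio=1.0):
    """Seconds to stage one bulk file: compress (if enabled), then upload."""
    compress_s = file_mb / compress_mbps if compress_mbps else 0.0
    upload_s = (file_mb * ratio * 8) / bandwidth_mbps  # MB -> Mbit
    return compress_s + upload_s

# Hypothetical example: 500 MB CSV batch, 1 Gbps link,
# gzip at ~60 MB/s with a ~5:1 compression ratio.
with_gzip = staging_time(500, 1000, compress_mbps=60, ratio=0.2)
without   = staging_time(500, 1000)
print(f"with gzip: {with_gzip:.1f}s, without: {without:.1f}s")
# -> with gzip: 9.1s, without: 4.0s
```

On a fast link, the model suggests compression can dominate the staging time; on a slow link the smaller upload wins. Plugging in actual batch sizes and link speed would indicate which side of that trade-off this environment is on.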
I have summarized the steps from the attached trace logs (as per my understanding) below. Please have a look.
1. The Qlik task flushed the in-memory CDC buffer to file and started a new bulk apply.
This is part of preparing the Databricks staging for a new bulk load.
2. Qlik starts to compress data before uploading to S3.
3. Compression stage: CPU-bound, single-threaded.
4. Upload to S3 staging and load into Databricks.
5. The latency spike starts during the compression/upload phase (before apply).
6. The apply to Databricks (insert execution) finished quickly; the Databricks side is efficient.
7. Again a CPU- and I/O-bound stage.
8. S3 upload + staging.
9. Latency increased another ~30s from the previous measurement.
10. The apply is again fast, so it is not the cause of the latency.
11. The pattern repeats.
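One quick way to sanity-check step 3 (compression being a CPU-bound, single-threaded cost) outside of Replicate is to time gzip on a payload of comparable size. This is a stand-alone Python sketch, not a Qlik Replicate API:

```python
import gzip
import time

# Stand-alone check (not a Qlik API): time gzip on CSV-like data to gauge
# the single-threaded compression throughput available on this host.
payload = b"id,op,ts,col1,col2\n" * 500_000  # ~9.5 MB of CSV-like data

start = time.perf_counter()
compressed = gzip.compress(payload, compresslevel=6)
elapsed = time.perf_counter() - start

print(f"{len(payload)/1e6:.1f} MB -> {len(compressed)/1e6:.2f} MB "
      f"in {elapsed:.3f}s ({len(payload)/1e6/elapsed:.0f} MB/s)")
```

Scaling the measured MB/s to the actual bulk file sizes gives a floor for the compression portion of each staging cycle; if that floor is small relative to the ~30s spikes, the S3 upload or the Databricks load is the better place to look.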
From the Qlik side, kindly let me know what can be done to reduce the Target Apply latency, which seems to occur at the S3 staging level and while executing queries to the target.
Thanks,
Sudarsan K
The log indicates the task is configured for full load and apply changes, not store changes. That is why you are seeing activity involving the attrep_changes table.
Please note: if there are no errors and data is not being applied in one-by-one mode (i.e., the bulk load is not breaking), this may be something you need to work on with our Professional Services team (fee based).
To do a proper evaluation of the issue and if it falls to Support or Professional Services, please open a support case and provide the diagnostics package for the task.
Also:
During a time that you are experiencing latency or performance issues:
a. On the task's Monitoring tab, Tools drop-down menu, select Log Management.
b. On the screen that opens, please ensure the box "Store trace/verbose logging in memory, but if an error occurs write to the logs" is NOT checked. Scroll down to the following items and set each one position to the right, to Trace. Click OK.
c. The change will take effect immediately (no need to stop/resume the task).
d. The task log will be fairly large so limit this to only 20-30 minutes before returning these logging levels back to Info, then download the log(s) from the View Logs screen (not in a diagnostics package as they will be truncated), zip them and upload it to this case.
PERFORMANCE
SOURCE_CAPTURE
TARGET_APPLY
Thanks,
Dana
Hi Dana,
All my Qlik production tasks are running with Full Load and Store Changes only enabled. Change Processing (Apply Changes) is disabled here.
Please find my qlik task screenshots attached here.
You mentioned that activity involving the attrep_changes table will appear in the logs only if Apply Changes is enabled. In that case, why do I see activities involving the attrep_changes table in the Qlik logs, and why are the Batch Tuning and Transaction Offload Tuning values enabled in the performance tuning settings?
If Change Processing Tuning is not relevant (when only Store Changes is enabled), then why are all the options under Batch Optimized Apply enabled and editable for me?
If performance tuning is not applicable to a task with only Store Changes enabled, then how can I configure tuning values to improve the target latency here?
Please clarify on this.
Note: I am using the Qlik Replicate November 2024 version here.
Thanks,
Sudarsan K
I apologize; your screen captures clearly show that Store Changes is enabled and Apply Changes is not. The log file stated that the task was running Full Load and CDC, which I misinterpreted:
00059715: 2025-11-06T10:59:25 [TASK_MANAGER ]I: Task 'Child_RMS_LS_Db_RMS' running full load and CDC in resume mode (replicationtask.c:1846)
As noted in my earlier post:
If there are no errors in any of the logs (there are none in this log you attached) this may be something you need to work on with our Professional Services team (fee based) for help with performance tuning.
To do a proper evaluation of the issue and to confirm if it falls to Support or Professional Services, please open a support case and provide the diagnostics package for the task and an enhanced task log per the details I provided in my last post.
Thanks,
Dana