
harikesh_1991
Contributor

Facing huge latency issues in my child tasks

Hello All,

We are in the process of replicating 1,200+ tables from an Oracle ERP source to a target Oracle data warehouse. Because of the large number of tables, we decided to use a Log Stream and child task setup. We split the Log Stream tasks into 4 buckets based on the daily change volume (Low, Medium, High, and Temp); the tables in Temp will be sorted into the other 3 buckets at a later point. The child tasks are divided into 18: 8 dedicated tasks with one high-volume table each, 6 tasks for low volume, 3 tasks for medium volume, and 1 for Temp.
The Log Stream task is running fine with no latency. However, about 5 child tasks have extremely high latency. Changes are accumulating on disk on the target, close to 220,991 transactions, with 2,654 transactions waiting until target commit. No updates are being applied to the table. Initially I found this error in target_apply:

"01984869: 2025-11-18T00:19:20 [TARGET_APPLY ]T: Failed to execute statement. Error is: ORA-20999: 0 rows affected ORA-06512: at line 1 ORA-06512: at line 1
~{AgAAACJPDXuDj7NuzKKZuhSA2i723IjjATLTcyL3ERn8+QUzNE8RMERqiUYA7kOMnOaX3Wc4W281UX8IYxudA3YrCAEVr5+i0nB5RsPT7yk48GiwfHqoQpYV7luwBuxIIyMYI5MFJmByPPt2Z2ZbsMlhESBsVz"

After switching to UPSERT mode on the Apply Conflicts tab, that error no longer appears, but now I see the following message in target_apply:

Got non-Insert change for table 200 that has no PK. Going to finish bulk since the Inserts in the current bulk exceeded the threshold

Could you please advise how I can resolve this issue? I have attached a screenshot of the UI along with the task logs and the DR package.

3 Replies
Dana_Baldwin
Support

Hi @harikesh_1991 

If there are no PKs defined on the target table, every update and delete operation will cause a full table scan, which is dramatically slower than an indexed lookup.

Please define a primary key for these tables on the target.
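For example, the DDL would look something like this (a minimal sketch; the schema, table, and column names are placeholders, not taken from your task):

    -- Add a primary key on the target table so updates and deletes
    -- are applied via an index lookup instead of a full table scan.
    ALTER TABLE target_schema.orders
      ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);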

Thanks,

Dana

harikesh_1991
Contributor
Author

Hello Dana,

Thanks a lot for your insights on this issue. However, when I checked the 1,200+ source tables we are replicating, only very few have primary keys defined; most of them have unique indexes instead.

Could you please advise whether adding these unique indexes at the target would improve the performance of the update/delete operations?

I tried adding unique indexes at the target for a few tables and the latency did reduce, but I just want to be 100% sure before implementing this as the solution for the rest of the child tasks.
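For reference, the statements I ran were of this form (names are placeholders):

    -- Unique index on the same column(s) the source's unique index covers,
    -- so update/delete lookups on the target are indexed.
    CREATE UNIQUE INDEX orders_ux
      ON target_schema.orders (order_id);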

Thanks a lot in advance for your advice on this.

Regards,
Harikesh OP

john_wang
Support

Hello @harikesh_1991 , copy @Dana_Baldwin ,

You're absolutely right: a unique index does help improve performance.

  • If the table has a primary key, Replicate will use it.
  • If there's no primary key but a single unique index exists, that unique index will be used.
  • If there's no primary key and multiple unique indexes exist, you can choose which one to use; otherwise the first one (sorted alphabetically by name) is used.
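If it helps, a dictionary query along these lines (a sketch against the standard Oracle dictionary views; replace YOUR_SCHEMA with your schema) lists the unique indexes on tables that have no primary key, sorted in the alphabetical order described above:

    -- Unique indexes on tables that lack a primary key constraint
    SELECT i.table_name, i.index_name, c.column_name
    FROM   all_indexes i
    JOIN   all_ind_columns c
           ON c.index_owner = i.owner
          AND c.index_name  = i.index_name
    WHERE  i.table_owner = 'YOUR_SCHEMA'
    AND    i.uniqueness  = 'UNIQUE'
    AND    NOT EXISTS (SELECT 1
                       FROM   all_constraints pk
                       WHERE  pk.owner           = i.table_owner
                       AND    pk.table_name      = i.table_name
                       AND    pk.constraint_type = 'P')
    ORDER BY i.table_name, i.index_name, c.column_position;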

Hope this helps.

John.
