harikesh_1991
Contributor III

Soft Delete function not working as expected after using it in global rules for all my child tasks

Hello @john_wang @DesmondWOO @Dana_Baldwin ,

Hope you are doing well :). I have a query. We recently deployed a parent (Logstream) task and its related child tasks in PRD. The source is an Oracle ERP endpoint and the target is an Oracle DB. There are about 1200+ tables, split into High Volume (HV), Medium Volume (MV), and Low Volume (LV) tasks. Almost all of the 1200+ tables at the source have unique indexes, and I made sure that all of these tables at the target have unique indexes as well so that CDC functions effectively.
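As a side note, a quick way I verified the target-side indexes was a dictionary query like the one below (a rough sketch; the schema name MY_SCHEMA is just a placeholder for our target schema):

```sql
-- List target tables in a given schema that have NO unique index,
-- i.e. candidates where CDC UPDATEs/DELETEs may not apply reliably.
SELECT t.table_name
FROM   all_tables t
WHERE  t.owner = 'MY_SCHEMA'
AND NOT EXISTS (
    SELECT 1
    FROM   all_indexes i
    WHERE  i.owner      = t.owner
    AND    i.table_name = t.table_name
    AND    i.uniqueness = 'UNIQUE'
);
```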

We used the operation_indicator('D', 'U', 'I') function in a global rule, adding a new change_oper column for all 1200+ tables, so that when the source deletes a row, the target does not issue a physical delete but instead retains the row with 'D' in the change_oper column.
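For reference, this is the expression we set for the added column in the global transformation rule (the column name change_oper and its length are our choices, not anything mandated by Qlik):

```sql
-- Expression for the added change_oper column (e.g. STRING(1)) in the
-- Qlik Replicate global rule. The arguments are, in order, the values to
-- write on DELETE, UPDATE, and INSERT; on a source DELETE the row is
-- meant to be kept at the target with 'D' in this column.
operation_indicator('D', 'U', 'I')
```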

However, for many tables the row is still being physically deleted despite this function being in place. When I checked the Qlik guidelines, I saw the following:

  • The operation_indicator function is not supported on tables that do not have a Primary Key.
  • This function is not supported when:
    • The Apply Conflicts error handling policy is set to No record found for applying an update: INSERT the missing target record.
    • The Apply changes using SQL MERGE task setting is enabled.

I have a query: when Qlik mentions a primary key, does this apply only to the target tables, or do the source tables need primary keys as well? In our case, all the source tables have only unique indexes, and since it's an ERP environment, adding primary keys at the source is not an option. Will this work if the source tables have unique indexes and I promote the existing unique indexes to primary keys at the target?
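The promotion I have in mind would look roughly like this on the Oracle target (table, column, and index names here are made up for illustration):

```sql
-- Hypothetical example: promote an existing unique index (ux_emp_id on
-- column emp_id) to a primary key constraint, reusing the index so no
-- new index has to be built.
ALTER TABLE emp
  ADD CONSTRAINT emp_pk PRIMARY KEY (emp_id)
  USING INDEX ux_emp_id;
```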

Besides, we have also enabled the setting below in all the child tasks to ensure duplicates are not replicated (upsert mode):

  • The Apply Conflicts error handling policy is set to No record found for applying an update: INSERT the missing target record.

Will changing this to "Ignore Record" cause duplicates? Is having just "Duplicate key when applying INSERT: UPDATE the existing target record" sufficient to ensure no duplicates are replicated?

Any insights on this would help us resolve the issue.
Thanks a lot in advance.


Regards,
Harikesh OP




12 Replies
john_wang
Support

Hello Harikesh OP, @harikesh_1991 

The screenshots and task log file don’t provide enough information to understand the behavior.

I’m also not sure whether your task is using the operation_indicator() function. If it is, then it appears you are running with an unsupported configuration. As mentioned earlier:

“You have already identified the root cause in your description. This configuration is not supported...”

Additionally, please avoid including task log files in public articles, as they are visible to all users. (I have deleted it now.)

If you need further assistance, I recommend opening a support ticket and attaching the task Diagnostics Package, as we need to check your task settings. Our support team will be happy to help.

Regards,

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
harikesh_1991
Contributor III
Author

Sure @john_wang. No problem. Thanks for the update. Please feel free to take a look when you get a chance.

Regards,
Harikesh OP

john_wang
Support

Hello Harikesh OP, @harikesh_1991 

Please check my previous comment.

Thanks,

John.
