Hello @john_wang @DesmondWOO @Dana_Baldwin,
Hope you are doing well :). I have a query. We recently deployed a parent (Log Stream) task and its related child tasks in PRD. The source is an Oracle ERP endpoint and the target is an Oracle DB. There are about 1,200+ tables, split into High Volume (HV), Medium Volume (MV), and Low Volume (LV) tasks. Almost all of these source tables have unique indexes, and I made sure that all the corresponding target tables have unique indexes as well so that CDC functions effectively.
We used the operation_indicator('D', 'U', 'I') function in a global rule to add a new column, change_oper, to all 1,200+ tables, so that when the source deletes a row, the target does not get a physical delete but instead keeps the same row with 'D' in the change_oper column.
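To illustrate the intended behaviour (a sketch only; the schema, table, and key column names below are made up for illustration), a row deleted at the source should remain at the target, merely flagged as deleted:

    -- Hypothetical target table; change_oper is populated by
    -- operation_indicator('D', 'U', 'I') in the global rule.
    -- A source DELETE should surface as a retained row flagged 'D',
    -- not as a physically removed row.
    SELECT order_id, change_oper
    FROM   tgt_schema.orders
    WHERE  change_oper = 'D';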
However, many tables are still physically deleting rows in spite of this function. When I checked the Qlik guidelines, I saw the below:
My question: when Qlik mentions a primary key, is that only for the target tables, or do the source tables also need primary keys? In our case, all the source tables have only unique indexes, and since it is an ERP environment, adding primary keys at the source is not an option. Will this work if the source tables have unique indexes and I promote the existing unique indexes to primary keys at the target?
Besides, we have also enabled the below in all the child tasks to ensure duplicates are not replicated (upsert mode):
Will changing this to "Ignore Record" cause duplicates? Is having only "Duplicate key when applying INSERT: UPDATE the existing target record" sufficient to ensure no duplicates are replicated?
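To clarify what I mean by upsert mode: my understanding (a rough SQL analogy only, not what Replicate literally executes; table and column names are hypothetical) is that a duplicate-key INSERT ends up being applied like a MERGE:

    -- Upsert analogy: insert the row, or update it if the key already exists.
    MERGE INTO tgt_schema.orders t
    USING (SELECT 1001 AS order_id, 'SHIPPED' AS status FROM dual) s
    ON (t.order_id = s.order_id)
    WHEN MATCHED THEN
      UPDATE SET t.status = s.status
    WHEN NOT MATCHED THEN
      INSERT (order_id, status)
      VALUES (s.order_id, s.status);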
Any insights on this would help us resolve the issue.
Thanks a lot in advance.
Regards,
Harikesh
Hi All,
Just checking if anyone had a chance to look at my question above. I’d really appreciate any guidance or clarification when you get a moment.
Thanks in advance!
Hello @harikesh_1991 ,
You have already identified the root cause in your description. This configuration is not supported: Soft Delete cannot be enabled together with the following UPSERT mode setting:
Feel free to let me know if I missed anything.
Thanks,
John.
Hello @john_wang ,
Thank you for the clarification on the Apply Conflicts settings. However, could you also advise on the primary key question mentioned above?
When Qlik mentions a primary key, is that only for the target tables, or do the source tables also need primary keys? In our case, all the source tables have only unique indexes, and it is not possible to add primary keys to the source tables. Will this work if the source tables have only unique indexes and I promote the existing unique indexes to primary keys at the target?
Regards,
Harikesh
Hello Harikesh, @harikesh_1991
"In our case, all the source tables have only unique indexes, and it is not possible to add primary keys to the source tables."
If a table does not have a primary key (PK) but has a unique index (UI), the unique index is sufficient.
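If you do decide to promote the existing unique indexes to primary keys on the target side, a minimal Oracle sketch would look like the following (table, column, and index names are hypothetical):

    -- Reuse the existing unique index to back the new primary key.
    ALTER TABLE tgt_schema.orders
      ADD CONSTRAINT orders_pk PRIMARY KEY (order_id)
      USING INDEX orders_ux;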
Please note the priority rules for key selection:
Hope this helps.
John.
Hello @john_wang ,
My query is solely about using the operation_indicator function for the soft delete approach.
The Qlik help pages state the below:
Hence I wanted to know: is a primary key needed on both the source and target tables for the soft delete to work correctly, or is a unique index on the source and target tables sufficient?
Sorry, I was not sure whether your earlier response was indeed about the soft delete approach (the operation_indicator function) or not.
Regards,
Harikesh
Hello Harikesh, @harikesh_1991
Yes, I’m addressing the concerns raised in your previous comments. Let me summarize:
I hope this clarifies your concerns.
John.
Hello @john_wang ,
Even with the source having a unique index and the target having the same columns as a primary key, I have found an example of a table where the soft deletes are not captured; instead, a physical (hard) delete happens at the target.
I have ensured that the task setting below is changed to "Ignore Record":
However, even after doing all this, I see records that are being physically deleted instead of being captured as soft deletes.
I was able to find this out because another process reads from the target tables of our tasks and replicates these updates, inserts, and deletes to another table in a different schema, based on the change_oper column used for soft delete. Since the deletes are occurring physically in our target table, this is causing duplicates in the final target table.
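As a quick diagnostic (a sketch only; the schema, table, and key column names are hypothetical), keys that appear more than once in the final table point at deletes that were applied physically upstream instead of being flagged via change_oper:

    -- Duplicated keys downstream are candidates for hard-deleted rows upstream.
    SELECT order_id, COUNT(*) AS cnt
    FROM   final_schema.orders
    GROUP  BY order_id
    HAVING COUNT(*) > 1;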
I have attached the logs and a screenshot of the records being physically deleted.
Any advice or resolution for this issue would be much appreciated.
Regards,
Harikesh
Hi All,
Just checking if anyone has had a moment to look into the above query. I'd really appreciate any guidance or clarification, as it would help us understand why the deletes are not being captured as soft deletes.
Regards,
Harikesh
Hello Harikesh, @harikesh_1991
I'll check it out for you later; I'm just tied up with another issue at the moment.
Regards,
John.