Hi support,
Happy New Year, I hope all is well with you. I have recently come across an issue with a client involving some odd behaviour.
The setup is a LogStream task from Oracle to Event Hubs in the dev environment. The odd behaviour: whenever an update occurs on one specific table in the task, the updated columns plus the primary key column become null on the target. This happens only for that one table and none of the others in the task.
I discovered this message in the logs. I also checked the Oracle source limitations and found that function-based indexes are not supported.
[Screenshot from Oracle]
This led me to suspect that the strange behaviour is caused by the function-based index on the table. I will confirm whether this is the only table in the task that has a function-based index. I just need someone to confirm whether this is indeed the cause. In the meantime, I will also try to reproduce it in my environment.
Any help with this is greatly appreciated.
Regards,
Mohammed
Hello Mohammed @MoeE ,
Happy New Year!
Could you please share the Oracle version, the Replicate version, and the table creation DDL? We'd like to confirm this for you.
Regards,
John.
Hi John,
Oracle 19
QR November 2022
Is there an email or link I can use to securely provide the DDL? Thanks.
Regards,
Mohammed
Hello Mohammed, @MoeE ,
Please open a support ticket and provide the necessary information; we'd like to pick it up from there.
BTW, could you please confirm whether the Oracle 19c installation is Container (CDB) or non-Container? I just want to know whether LogMiner is supported or not.
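If it helps, a quick way to check is to query V$DATABASE (a sketch; the CDB column is available from Oracle 12c onward):

```sql
-- Returns 'YES' for a Container (CDB) installation, 'NO' for non-CDB.
SELECT cdb FROM v$database;
```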
thanks,
John.
Hello @MoeE ,
Per my investigation, the null values on the target were not caused by the function-based index; instead, this is a supplemental-logging-related issue.
If the table has no PK but only a function-based index, Qlik Replicate regards the table as having no PK or unique index. In that scenario, all columns should be added to supplemental logging when Kafka is the target endpoint. Qlik Replicate adds this automatically (with the default settings) unless the Oracle source endpoint option "Automatically add supplemental logging" is turned off:
In that case, the DBA can add it manually, for example:
    ALTER TABLE FBI ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
The test table looks like:
    CREATE TABLE FBI (ID INTEGER, NAME CHAR(20), NOTES VARCHAR2(200));
    CREATE UNIQUE INDEX FBI_IDX ON FBI(UPPER(NAME));
where FBI is the table name and FBI_IDX is the function-based index name.
After adding supplemental logging correctly, all columns on the Kafka target receive non-null values during the CDC stage.
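To double-check that the table-level supplemental logging is in place, you can query the ALL_LOG_GROUPS dictionary view (a sketch, using the SCOTT.FBI test table above):

```sql
-- Expect LOG_GROUP_TYPE = 'ALL COLUMN LOGGING' once
-- SUPPLEMENTAL LOG DATA (ALL) COLUMNS has been added.
SELECT log_group_name, log_group_type
  FROM all_log_groups
 WHERE owner = 'SCOTT'
   AND table_name = 'FBI';
```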
BTW, there is a log message indicating that the function-based index is not supported in the SOURCE_UNLOAD stage. This affects the Full Load stage but has no impact on the CDC stage:
2024-01-20T12:33:19:855355 [SOURCE_UNLOAD ]I: Unique index FBI_IDX of the table 'SCOTT.FBI' cannot be used because the column 'SYS_NC00004$' is not supported (oracle_endpoint_table.c:1594)
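The SYS_NC00004$ column in that message is the hidden virtual column Oracle creates internally to back the function-based index. It can be seen (a sketch, again using the SCOTT.FBI test table) by querying DBA_TAB_COLS:

```sql
-- DBA_TAB_COLS (unlike DBA_TAB_COLUMNS) also lists hidden columns,
-- so the virtual column backing the function-based index shows up here.
SELECT column_name, hidden_column, virtual_column
  FROM dba_tab_cols
 WHERE owner = 'SCOTT'
   AND table_name = 'FBI';
```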
(Test env: Qlik Replicate 2023.5 + Oracle 12c + Kafka 3.6)
Hope this helps.
John.
Hi John,
Thanks for helping out with the extra information on this. It was indeed helpful.
I have already checked that supplemental logging is enabled on all columns, but it is probably best for me to double-check that it is implemented correctly anyway.
Regards,
Mohammed
Hello @MoeE ,
Thanks for the update. Yes, please. BTW, the LogMiner option is still available if their Oracle installation is in non-CDB mode; we can use a pure LogMiner script (outside of Qlik Replicate) to cross-check the contents of the redo log files. Anyway, it looks to me like the root cause should not be hard to pin down.
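For reference, a minimal LogMiner session outside of Replicate might look like the following. This is only a sketch: the redo log path is a placeholder, and SCOTT.FBI is the test table from earlier in the thread.

```sql
-- Register a redo log file and start LogMiner with the online catalog.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/oradata/ORCL/redo01.log',  -- placeholder path
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- SQL_REDO shows the reconstructed statements, including any
-- supplemental log data captured for the update.
SELECT operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT' AND table_name = 'FBI';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```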
Good luck,
John.
Hi John,
Thanks, I've just suggested to the customer that they look through the redo logs to see whether the entries are being created correctly. If there seems to be an issue there, I will ask them to drop and recreate the supplemental logging and try again. I will update you on how this goes.
Regards,
Mohammed
Hi John,
Additionally, we have discovered that this only happens when a transformation is added to the table. The transformation simply adds a column containing the name of the source table. Removing the transformation fixes the issue, but the customer requires this column.
We have supplemental logging enabled for all columns, yet with this transformation the updated records still arrive with null values.
Any thoughts?
regards,
Mohammed
Hello team,
To add to the previous TSE's comments: supplemental logging enabled for all columns helps Replicate identify the columns/row on the target side. However, a transformation works differently, as it happens on the fly:
fetch source info -> transformation -> apply it on the target.
Looking into how the table name is being fetched may provide some clues.
Regards,
Sushil Kumar