I'm facing a strange scenario. We're loading updates from an Oracle database using LogMiner into an Azure Event Hub.
I have a task with several monitored tables:
ADDRESS
etc.etc.
The ADDRESS table has a field called "AddressType" which is used in a filter statement to ensure that only certain values trigger updates.
This works fine most of the time, but occasionally (for example, after the server is rebooted) the column in question comes through as null on every record, which makes the filter fail. Restarting the task usually fixes it.
This is truly bizarre. Has anyone heard of anything like this happening?
What is the full version number of Replicate that you're using? We might need a support case to look into it in more detail, but with the version we can at least check documentation for later releases to see if this is a known issue.
Thanks,
Dana
Thanks for the reply. Looks like we're missing a few updates with potential Oracle fixes, so we're going to try that route first.
No one else is seeing anything like this???
I've upgraded to the May 2024 release and have done a ton of troubleshooting with support, with no resolution as of yet. They're struggling to reproduce the issue, sadly.
Hello @eblackstonegesa ,
Sorry to hear about that. Would you please share the support ticket number? We'd like to review the case again with support team.
Regards,
John.
Hi John! Sent you a pm with the case number. Thanks!
Hello @eblackstonegesa ,
Thanks for the information. Please allow me some time, I will get back to you shortly with my findings.
Regards,
John.
Hello @eblackstonegesa, cc @shashi_holla, @Dana_Baldwin
I think I’ve found something in the attachments. If my understanding is correct, the issue occurs when the ADDRUSECD column has a NULL value.
1. The problem is unrelated to the table’s supplemental logging settings, so increasing the table supplemental logging level cannot solve the issue.
2. The issue is caused by the computation of the ADDRUSECD column. This is evident from both the exported JSON file and the task log file.
In the task log file "issue reproduced.log", line #190149:
2025-02-14T12:31:03:770492 [TRANSFORMATION ]T: New column 'ADDRUSECD', type: 'kAR_DATA_TYPE_STR' (manipulator.c:1490)
2025-02-14T12:31:03:770492 [TRANSFORMATION ]T: Column 'address.postal-address-updated.ADDRUSECD' will not be replicated homogeneously as it was added using a transformation (manipulator.c:1545)
2025-02-14T12:31:03:770492 [TRANSFORMATION ]T: Transformation expression is 'source_lookup('NO_CACHING','PARTNERAPI','VIEW_GETADDRINFOBYADDRNBR','ADDRUSECD','ADDRNBR=:1',$ADDRNBR)' (manipulator.c:713)
3. I’m not sure about the task design and replication logic, but this does not seem like a common scenario.
4. If the logic is intended to retrieve another column’s value, please ensure that the query against "VIEW_GETADDRINFOBYADDRNBR" does not return NULL rows; a quick check is sketched below.
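For reference, here is a quick sanity check you could run on the source side. This is only a sketch: I'm assuming PARTNERAPI is the owning schema (as the lookup expression suggests) and that the view exposes ADDRNBR and ADDRUSECD directly.

    -- List any address numbers where the lookup view returns a NULL ADDRUSECD
    SELECT ADDRNBR, ADDRUSECD
    FROM PARTNERAPI.VIEW_GETADDRINFOBYADDRNBR
    WHERE ADDRUSECD IS NULL;

    -- Or check a single address number taken from a problem record
    SELECT ADDRUSECD
    FROM PARTNERAPI.VIEW_GETADDRINFOBYADDRNBR
    WHERE ADDRNBR = :addr_nbr;

If either query returns rows, the source_lookup itself can legitimately produce NULLs for those address numbers.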
I'm glad to be working with Shashi on this case. Please don't hesitate to reach out if you need any additional information.
Good luck,
John.
Thanks for taking a look. The problem with that finding is that ADDRUSECD is actually filtered in the record selection condition; it must match a list of possible codes in every table defined in that task.
The fact that these records are reaching my target system proves that the column isn't actually null coming out of the source lookup. It's getting blanked out inappropriately somewhere after that point, which looks like a bug in how the target message is built.
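For context, the record selection condition on that table is roughly of this form (the codes below are placeholders, not our real values, and I'm paraphrasing the expression from memory):

    -- Record selection condition (SQLite-style Replicate expression)
    $ADDRUSECD IN ('HOME', 'MAIL', 'BILL')

A NULL can never satisfy that IN list, so any record that makes it to the target had to have a real value at the time the filter was evaluated.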
I was going to try swapping the Azure Event Hub target for a Kafka target, but, strangely enough, it doesn't look like our Qlik license allows Kafka endpoints.