MoeE
Partner - Specialist

Source_lookup incorrect values

Hi,

I have a question about a source lookup that sometimes returns the wrong date. This is a task replicating directly from Oracle to MS SQL Server. The goal is to grab some data from the main table and insert it into the other tables using a transformation. This works, however sometimes the lookup returns '1753-01-01 00:00:00.0000000', which is the minimum datetime value for Microsoft SQL Server.

This is the statement: 

source_lookup(100000,'ATU_DATA','TRN511','TRN511_PSTDAT','TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',$TRN512_ENTTYP,$TRN512_ENTIDE,$TRN512_TRCSEQ)
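For reference, my understanding of the parameters (based on the source_lookup syntax described in the Replicate documentation; the comments are my reading, not official wording) is roughly:

```sql
source_lookup(
    100000,            -- TTL in seconds: how long a looked-up value stays cached
    'ATU_DATA',        -- source schema
    'TRN511',          -- source table to look up in
    'TRN511_PSTDAT',   -- expression/column to return
    'TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',  -- filter with bind placeholders
    $TRN512_ENTTYP, $TRN512_ENTIDE, $TRN512_TRCSEQ                 -- values bound to :1, :2, :3
)
```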

To give more potentially useful context, stopping and resuming the task fixes this behaviour.

Does anyone know the reasons or examples as to why this behaviour could occur?

Kind regards,

Moe

 

Labels (3)
9 Replies
john_wang
Support

Hello @MoeE ,

Thanks for reaching out to Qlik Community!

If simply stopping and resuming the task fixes this behavior, I suspect it's caused by the TTL (Time To Live) in the function. It is currently set to 100,000 seconds, which is about 27.8 hours. Would you set it to a lower value (e.g. 600 seconds, just for troubleshooting purposes) to see if the problem can be suppressed?
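For example, the lookup from the original post with the TTL lowered to 600 seconds would look like this (a sketch only, everything else unchanged):

```sql
source_lookup(600, 'ATU_DATA', 'TRN511', 'TRN511_PSTDAT',
              'TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',
              $TRN512_ENTTYP, $TRN512_ENTIDE, $TRN512_TRCSEQ)
```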

If the problem persists, let's get a Verbose task log file with SOURCE_CAPTURE enabled. Please decrypt the task log file before you upload it to the case, and include the table DDL (with PK/Unique Index information).

Hope this helps.

Regards,

John.

Help users find answers! Do not forget to mark a solution that worked for you! If already marked, give it a thumbs up!
john_wang
Support

Hello @MoeE ,

BTW, please confirm if the PK/Unique Index values were changed during the CDC stage.

Regards,

John.

Heinvandenheuvel
Specialist III

Hmm, what was the latency at the time of the failed lookup? Seconds, or hours?

Could one of the lookup key columns, for example the entity sequence, have been updated since the original (insert?) change record, with that subsequent update not having come through yet? Such an issue would resolve itself once the subsequent update gets processed.

To investigate, you might need to run with 'store changes' for a while, trying to get a timeline for the row with the 'row not found' date.

To mitigate, you may want a 'case' around the source lookup and return your own 'special' date if the lookup returns a null. '1900-01-01' or whatever - something you recognize vs some standard problem value.
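Something along these lines (a sketch only; the sentinel date is just an example you would pick yourself):

```sql
CASE
    WHEN source_lookup(100000, 'ATU_DATA', 'TRN511', 'TRN511_PSTDAT',
                       'TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',
                       $TRN512_ENTTYP, $TRN512_ENTIDE, $TRN512_TRCSEQ) IS NULL
    THEN '1900-01-01'   -- recognizable sentinel instead of SQL Server's 1753-01-01 minimum
    ELSE source_lookup(100000, 'ATU_DATA', 'TRN511', 'TRN511_PSTDAT',
                       'TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',
                       $TRN512_ENTTYP, $TRN512_ENTIDE, $TRN512_TRCSEQ)
END
```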

And indeed a TTL of 100,000 seconds, just over a day, would seem to defeat the purpose. Those keys may come back for a day, but why not look them up every so often, like every 1,000 seconds (about 17 minutes)?

Hein.

MoeE
Partner - Specialist
Author

Hi John, 

Thanks for the help, I will update you soon. I also have one unrelated question. If I enable 'store verbose logging in memory' and an error occurs, will the lines preceding the error also appear in the logs, or will only the lines after the error occurs be written? I ask because I want to see what happens both before and after the error occurs. Thanks.


 

Regards,

Mohammed

MoeE
Partner - Specialist
Author

Hi Hein,

I appreciate the response. I'm a bit confused by the mechanics of this first scenario and how it results in the date value we see.

>>> Could one of the columns, for example the entity sequence, have been updated since the original (insert?) change record, with that subsequent update not having come through yet? Such an issue would resolve itself once the subsequent update gets processed.

My thinking is that the lookup will always grab some value from the source, and if the latest change hasn't come in, then it will simply show the previous value. So I'm not sure of the scenario in which it can grab the default date instead. The rest of these points make sense to me, thank you.

Regards,

Mohammed

Michael_Litz
Support

Hi Mohammed,

I would try with no caching at all and force the lookup to happen immediately.

Please use 'NO_CACHING' as the TTL (first parameter)
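Applied to the lookup from the original post, that would look something like this (a sketch; NO_CACHING forces a fresh lookup on every change):

```sql
source_lookup('NO_CACHING', 'ATU_DATA', 'TRN511', 'TRN511_PSTDAT',
              'TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',
              $TRN512_ENTTYP, $TRN512_ENTIDE, $TRN512_TRCSEQ)
```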

 

Does that help out?

Thanks,
Michael

Heinvandenheuvel
Specialist III

>>> My thinking is that the lookup will always grab some value from the source and if the latest change hasn't come in then it will simply show the previous value. 

Correct - UNLESS some of the lookup key values changed (TRN512_ENTTYP, TRN512_ENTIDE, TRN512_TRCSEQ).

That can potentially result in a row no longer being found on the source, and with some bad luck on the timing, the row-not-found result will make the lookup return a null or low value.

EDIT: While timing can play a role for multiple changes to the lookup columns, any specific update which changes the lookup columns should work with the current, new values for those columns, and, contrary to what I wrote below, a lookup on the BI__ columns will in fact fail because those old values have already been updated on the source. Sorry.

If you think any of those values changed, you should look up "Using a column's before-image data in a transformation" in the documentation. Of course that will only get you over a single change hurdle, not multiple. To do this, you might try to specify the source column name in the Output table's Expression column in the following format: $BI__MyColumn

here:  $BI__TRN512_ENTTYP, $BI__TRN512_ENTIDE, $BI__TRN512_TRCSEQ
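Putting that together with the lookup from the original post would look roughly like this (a sketch only, and per my EDIT above it has its own failure mode for the update that changed the keys):

```sql
source_lookup(100000, 'ATU_DATA', 'TRN511', 'TRN511_PSTDAT',
              'TRN511_ENTTYP=:1 AND TRN511_ENTIDE=:2 AND TRN511_TRCSEQ=:3',
              $BI__TRN512_ENTTYP, $BI__TRN512_ENTIDE, $BI__TRN512_TRCSEQ)
```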

EDIT: However, the above will always make the simple update which changed any of those columns fail, as the old (BI) values have already been updated on the source.

@Michael_Litz "I would try with no caching at all and force the lookup to happen immediately." I appreciate the thought but I don't think it will help my suggested scenario.

Hein.

MoeE
Partner - Specialist
Author

Hi John,

Looking into the case now, this looks like it may be a possibility. Is a change to the PK a possible factor in an issue like this?

Thanks,

Mohammed

Heinvandenheuvel
Specialist III

>> ...it may be a possibility. Is a change to the PK a possible factor in an issue like this?

Very likely. That's what I suggested in two replies already: the lookup is using column values which have already been changed on the source, but that change has not made it into the same batch as the current change.

Extending the apply window sizes may mitigate this somewhat, reducing the chances if Replicate manages to see the original insert plus the change together, for example, but it will not fix all cases. As per my first reply, you could use the store-changes table to try to capture the transitions for the row in question. If you can temporarily change the target table, then you could also try to use a transformation to add the BI__ values for the lookup columns to the target row and hopefully get a clue from there - tricky!

Hein.