sunny_talwar

Incremental Load on SQL Joined Tables

Hi Friends -

I have done incremental loads a few times before, but this time I have a slightly different scenario. There are pre-existing applications which run every single day to populate millions of rows. All of these applications use some kind of SQL query which joins multiple tables from the data source.

Now what I am trying to do is implement an incremental reload here so that we can reduce the overall reload time, but in order to do that I have a few options:

  1. Somehow implement the incremental reload on top of the complicated query, which I will have to think about because each of the tables will have its own UniqueID and might have a different modified date.
  2. Create incremental QVDs for each of the tables separately and then perform all the joins in QlikView.
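
For reference, option 2 would follow the standard per-table incremental pattern. A rough sketch for one hypothetical table — all names here (Orders, OrderID, ModifiedDate, Orders.qvd, vLastReloadTime) are placeholders:

```
// Per-table incremental pattern (option 2), sketched for one table.

// 1. Pull only the rows changed since the last reload
Orders:
LOAD OrderID, ModifiedDate, Amount;
SQL SELECT OrderID, ModifiedDate, Amount
FROM Orders
WHERE ModifiedDate >= '$(vLastReloadTime)';

// 2. Append the unchanged history from the previous QVD,
//    skipping any OrderID we just re-fetched
Concatenate (Orders)
LOAD OrderID, ModifiedDate, Amount
FROM Orders.qvd (qvd)
WHERE Not Exists(OrderID);

// 3. Overwrite the QVD with the merged result
STORE Orders INTO Orders.qvd (qvd);
```

Repeated for each source table, after which the joins are performed in the load script from the QVDs.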

Has anybody worked on this kind of scenario before? Or would anybody be able to offer advice here?

Thanks,

Sunny

marcowedel, swuehl, mrkachhiaimp, jagan, marcus_sommer, maxgro, rwunderlich, gwassenaar, hic

10 Replies
adamdavi3s
Master

This might be too obvious, but can't you just pull the max modified date for each row in the SELECT and use that?

So what I mean is: take the greatest of the modified dates from the individual tables and use that as your trigger.

This assumes that if any one of the tables has been updated, then you want to pull the row in.
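
For example, with an invented three-table schema the trigger could sit directly in the WHERE clause (note that GREATEST exists in Oracle, MySQL and PostgreSQL, but SQL Server only added it in 2022 — older versions would need a CASE expression instead):

```sql
SELECT o.OrderID, o.Amount, c.CustomerName, p.ProductName
FROM   Orders    o
JOIN   Customers c ON c.CustomerID = o.CustomerID
JOIN   Products  p ON p.ProductID  = o.ProductID
-- pick the joined row up if ANY of the three source tables changed
WHERE  GREATEST(o.ModifiedDate, c.ModifiedDate, p.ModifiedDate)
       >= '$(vLastReloadTime)';
```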

sunny_talwar
Author

adamdavi3s -

First of all, thanks for your response, and let me assure you: nothing is obvious when you are not thinking straight.

Coming back to your response, I am sure there must be a way to figure this out on the already-joined table (the max modified date is one issue and the unique identifier is another). But what I am trying to understand is whether it makes sense to use one approach or the other. I know option 2 is easy to implement, but do I lose out on performance? Or am I getting too worried about option 1 when in fact it might not be that difficult?

Best,

Sunny

nagarjuna_kotha
Partner - Specialist II

Yes, we are doing the same thing here.

The scenario looks the same, but we are using SAP.

I have incremental loads for the individual transaction tables, and those are used in QV.

-Nagarjun

sunny_talwar
Author

So you guys are using option number 2? Have you thought of using option 1?

adamdavi3s
Master

Haha, tell me about it. We just had our second child and sleep deprivation is killing me!

Personally I would take option 1, purely because it is probably easier to adapt the query than to rewrite the app and add more QVD generation in.

You're right though: as a QlikView dev, option 2 is easier to implement and understand!

I don't know. As you say, either would work, but my gut would be to go with the first option. In this case, does the unique_id even matter?

All you need to know is that something has changed and that you need to reload that row, which you could do easily enough with a CTE at the top of the query...
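
For instance, with a hypothetical two-table schema the CTE could collect the changed keys up front and the main joined query would restrict itself to them:

```sql
-- Keys touched in ANY of the joined tables since the last reload
WITH changed AS (
    SELECT OrderID FROM Orders
    WHERE  ModifiedDate >= '$(vLastReloadTime)'
    UNION
    SELECT OrderID FROM OrderRows
    WHERE  ModifiedDate >= '$(vLastReloadTime)'
)
SELECT o.OrderID, o.Amount, r.LineNo, r.Quantity
FROM   Orders    o
JOIN   OrderRows r  ON r.OrderID  = o.OrderID
JOIN   changed   ch ON ch.OrderID = o.OrderID;  -- only changed rows
```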

nagarjuna_kotha
Partner - Specialist II

But some tables, like BSEG and MSEG, do not have a date to filter on.

For those, we use subqueries against the related header tables to match the records: BSEG against BKPF, MSEG against MKPF, and so on.
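
A sketch of that subquery pattern for the BSEG case (the date filter has to sit on the header table BKPF, since BSEG carries no usable change date of its own; here BKPF's entry date CPUDT is assumed as the trigger, and vLastReloadDate is a placeholder):

```sql
-- Pull BSEG line items whose accounting document header (BKPF)
-- was entered since the last run
SELECT b.*
FROM   BSEG b
WHERE  EXISTS (
         SELECT 1
         FROM   BKPF h
         WHERE  h.MANDT = b.MANDT
           AND  h.BUKRS = b.BUKRS
           AND  h.BELNR = b.BELNR
           AND  h.GJAHR = b.GJAHR
           AND  h.CPUDT >= '$(vLastReloadDate)'
       );
```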

I hope this helps.

_nagarjun

tresesco
MVP

For option 1, I would think about it like this:

You would not be interested in the records of the individual tables that get excluded by the join condition anyway. Therefore, checking the timestamps in the individual tables might not be a good idea. Maybe you can fetch the timestamp fields from all the tables while joining, and implement the incremental load on the resultant (post-join) table.
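
In load-script terms, that suggestion amounts to carrying every table's timestamp through the join and deriving one change marker per joined row (all table and field names below are invented for illustration):

```
// The joined extract keeps one ModifiedDate per source table
Joined:
SQL SELECT o.OrderID, o.Amount,
       o.ModifiedDate AS OrderModified,
       c.ModifiedDate AS CustomerModified
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;

// One marker per row: the latest of the per-table timestamps
Result:
LOAD *, RangeMax(OrderModified, CustomerModified) AS RowModified
RESIDENT Joined;
DROP TABLE Joined;
// The usual incremental logic then compares RowModified
// against the last reload time.
```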

sunny_talwar
Author

That is a very good point. So essentially option 2 is a risky strategy and I should focus on option 1 then, right? Did I read your comment correctly?

marcus_sommer

I'm not sure it is so easy, or that one version will clearly perform better than the other. It could be that you need to combine both approaches.

If you do the incremental approach only on the Qlik side, you will need to load all records from the database, which might take some time. Further, Qlik isn't quite as flexible with joins as SQL (you might need some additional steps), and the performance of Qlik joins (and group by / order by loads) isn't necessarily better than in a database.

I could imagine it might be useful to load all UniqueIDs from a table into Qlik, filter them with a where not exists() against the UniqueIDs already saved in a QVD, and build from the remaining IDs a list which could be used as a where-clause in the final data load from the database.
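
A load-script sketch of that ID-filter idea (all names are invented, and in practice a long IN() list may need to be chunked, since databases cap the number of literals):

```
// 1. UniqueIDs already stored in the historical QVD
StoredIDs:
LOAD UniqueID FROM History.qvd (qvd);

// 2. All current IDs from the database; Not Exists() keeps only unseen ones
NewIDs:
LOAD UniqueID AS NewID
WHERE Not Exists(UniqueID);
SQL SELECT UniqueID FROM SourceTable;

// 3. Build a comma-separated list for the SQL where-clause
//    (if UniqueID is a string, wrap each value in quotes when concatenating)
TempList:
LOAD Concat(NewID, ',') AS IDList RESIDENT NewIDs;
LET vIDList = Peek('IDList');
DROP TABLES StoredIDs, NewIDs, TempList;

// 4. Fetch only the new records from the database
Data:
SQL SELECT * FROM SourceTable
WHERE UniqueID IN ($(vIDList));
```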

I think you will need to test various combinations to find the solution which works best for your requirements.

- Marcus