sergio0592
Specialist III

Incremental reload issue

Hi all,

I'm facing an issue with incremental reload. I have a fact table (planning) with 9.5M rows that I store into a QVD file (planning_22_03.qvd). That part works.

But then I run an incremental reload that appends new data to that existing QVD (9.5M rows). The result is that the new file (planning_22_03_V2.qvd) contains only 6.6M rows.

I can't understand where my 3M rows went!

Here is my incremental reload script. What is going wrong?

V_NB_DAYS is a variable that defines the number of days to reload, usually between 1 and 5.

PLANNING_TABLE:
SQL select PLANNING.ID,
           X,
           X
from PLANNING P
where to_char(P.DT,'DD/MM/YYYY') >= to_date(sysdate - $(V_NB_DAYS));

Concatenate
load *
from planning_22_03.qvd (qvd) where not exists (PLANNING.ID);

STORE PLANNING_TABLE into planning_22_03_V2.qvd;
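
A quick way to see where the rows go is to compare total versus distinct key values in the historical QVD. A diagnostic sketch, reusing the field names above (not part of the reload script itself):

QVD_CHECK:
LOAD Count([PLANNING.ID])          as TotalRows,
     Count(DISTINCT [PLANNING.ID]) as DistinctIDs
from planning_22_03.qvd (qvd);

If DistinctIDs comes out lower than TotalRows, PLANNING.ID alone does not identify a row uniquely.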

Regards


5 Replies
vinieme12
Champion III

Try it with an alias field:


PLANNING_TABLE:
SQL select PLANNING.ID,
           X,
           X
from PLANNING P
where to_char(P.DT,'DD/MM/YYYY') >= to_date(sysdate - $(V_NB_DAYS));

Concatenate
load *, PLANNING.ID as PLANNING.ID1
from planning_22_03.qvd (qvd) where not exists (PLANNING.ID1, PLANNING.ID);

// if this doesn't work, interchange the positions (PLANNING.ID, PLANNING.ID1); my memory isn't helping me

STORE PLANNING_TABLE into planning_22_03_V2.qvd;
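
For reference, the two-parameter form Exists(FieldName, Expression) is true when the value of Expression has already been loaded into FieldName. A minimal illustration with made-up inline data:

INC:
LOAD * INLINE [
ID_NEW
1
2 ];

HIST:
LOAD * INLINE [
ID
2
3
3 ] WHERE NOT Exists(ID_NEW, ID);
// ID=2 is dropped (its value is already in ID_NEW);
// both ID=3 rows survive, because the HIST load adds values
// to ID, never to ID_NEW

The single-parameter Exists(PLANNING.ID) from the first post, by contrast, also sees the PLANNING.ID values loaded from the QVD itself, which is why duplicate keys inside the QVD get suppressed.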

Vineeth Pujari
If a post helps to resolve your issue, please accept it as a Solution.
sergio0592
Specialist III
Author

Thanks Vineeth for your answer, but it doesn't work. I still get 6.6M rows in planning_22_03_V2.qvd, and with this syntax I lose the optimized reload mode (it is as slow as if I didn't have the QVD at all).

sasiparupudi1
Master III

Try two things.

First, check whether your SQL WHERE clause actually returns the differential. Change

PLANNING_TABLE:
SQL select PLANNING.ID,
           X,
           X
from PLANNING P
where to_char(P.DT,'DD/MM/YYYY') >= to_date(sysdate - $(V_NB_DAYS));

to

PLANNING_TABLE:
SQL select PLANNING.ID,
           X,
           X
from PLANNING P
where Year(P.DT)=2017 and Month(P.DT)=3;

so that you can see whether your WHERE condition is working or not.


Second, try quoting the variable:

where to_char(P.DT,'DD/MM/YYYY') >= to_date(sysdate - '$(V_NB_DAYS)')


hth

Sas
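
If P.DT is a DATE column, a predicate that compares dates directly, rather than comparing a to_char() string with a date, may be more reliable. A sketch, assuming an Oracle source:

where P.DT >= trunc(sysdate) - $(V_NB_DAYS)

Keeping the column side free of functions also lets Oracle use an index on P.DT.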

sergio0592
Specialist III
Author

Thanks for your reply, but it doesn't change anything. The problem is in the WHERE NOT Exists(), because when I append the rows with a plain Concatenate, it's fine: I get 9.6M rows in planning_22_03_V2.qvd (100,000 rows reloaded for the last 5 days + 9.5M from the QVD).

It looks like an issue with my primary key PLANNING.ID, but it seems fine to me.

Peter_Cammaert
Partner - Champion III
Accepted Solution

Your primary key field may be only part of your actual primary key (a value that is unique for each and every row). If you only want to grow your target table, either omit the WHERE NOT Exists() clause, or correct the primary key so that all required rows are loaded from the historical QVD (see the sketch after this post).

At the moment, it seems to me that your CONCATENATE LOAD will only add:

  • PLANNING.ID values that do not yet exist in your initial increment (as extracted from your database)
  • and only single instances of those, because as soon as the first row with a specific PLANNING.ID value is added, any further row in your historical QVD with an identical PLANNING.ID will be skipped.

The historical QVD should contain 9.5M rows, but only 6.5M of them have a unique PLANNING.ID value.

Peter
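
A minimal sketch of the composite-key variant, assuming the QVD also contains the DT field and that PLANNING.ID plus DT together identify a row uniquely (the %PK field name is made up):

PLANNING_TABLE:
LOAD *, [PLANNING.ID] & '|' & DT as %PK;
SQL select PLANNING.ID,
           X,
           DT
from PLANNING P
where to_char(P.DT,'DD/MM/YYYY') >= to_date(sysdate - $(V_NB_DAYS));

Concatenate
LOAD *, [PLANNING.ID] & '|' & DT as %PK
from planning_22_03.qvd (qvd) where not exists (%PK);

DROP FIELD %PK;
STORE PLANNING_TABLE into planning_22_03_V2.qvd;

Note that computing %PK while reading the QVD makes that load non-optimized; if a single genuinely unique field is available instead, the plain WHERE NOT Exists(PLANNING.ID) pattern keeps the QVD read optimized.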