eduardo_dimperio
Specialist II

What is the best way to make an incremental QVD?

Hi,

I have a question about the best way to build an incremental QVD. Here are two approaches:

1-

MSG_ALR:
LOAD DISTINCT
    ID_MSG_REC,
    DT_REC,
    MODULE_ID as ID_MODULE,
    SubField(MSG, ',', 5) AS OCORRENCIA
RESIDENT CM_MSG_REC WHERE SubField(MSG, ',', 5) = 'BEGIN';

DROP TABLE CM_MSG_REC;

MSG_ALR:
LOAD *
FROM [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);

STORE MSG_ALR INTO [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);

2-

MSG_ALR:
LOAD DISTINCT
    ID_MSG_REC,
    DT_REC,
    MODULE_ID as ID_MODULE,
    SubField(MSG, ',', 5) AS OCORRENCIA
RESIDENT CM_MSG_REC WHERE SubField(MSG, ',', 5) = 'BEGIN';

DROP TABLE CM_MSG_REC;

CONCATENATE (MSG_ALR)
LOAD *
FROM [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);

STORE MSG_ALR INTO [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);

But if there is a third option, I'll be glad to hear about it.

Thank you

6 Replies
OmarBenSalem

Incremental load logic:

I have a large source, let's say 20 million lines:

1) Import the whole source and store it in a QVD:

Table:
LOAD * FROM source;

STORE Table INTO table.qvd (qvd);

2) Now that we have stored these 20 million rows in the QVD, we comment out the previous script.

3) The next day we have 21 million lines in our source. Instead of importing the whole 21 million from the database:

a) import the 20 million rows already stored in the QVD;

b) from the source database, import only the 1 million new lines:

=>

Table:
LOAD * FROM Table.qvd (qvd);

Concatenate
LOAD * FROM source WHERE NOT Exists(UniqueID);

We import only the lines whose key does not already exist in the QVD.
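The steps above can be sketched as one complete incremental reload script. This is a minimal sketch: the table name, key field, and library paths are illustrative assumptions, not names from this thread.

```qlik
// Hypothetical incremental load sketch: Orders, OrderID and the
// lib:// paths are assumptions for illustration.

Orders:
LOAD * FROM [lib://Data/Orders.qvd] (qvd);        // 1) reload everything already in the QVD

Concatenate (Orders)
LOAD OrderID, OrderDate, Amount
FROM [lib://Data/SourceOrders.qvd] (qvd)
WHERE NOT Exists(OrderID);                        // 2) append only rows whose key is new

STORE Orders INTO [lib://Data/Orders.qvd] (qvd);  // 3) write the combined table back
```

Note that the order matters: the QVD is loaded first so that, by the time the source rows arrive, Exists(OrderID) already knows which keys are present.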

I hope this was clear.

zebhashmi
Specialist

Hi - what about making an L1 QVD first from the table CM_MSG_REC,

and then building the L2 QVD by transforming the L1 QVD?
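That two-layer idea could look something like this. A sketch only: the L1/L2 file names are assumptions, and the raw extract is shown coming from the existing CM_MSG_REC table.

```qlik
// Layer 1 (hypothetical): store the raw table untouched, so the
// transform can be re-run without hitting the database again.
STORE CM_MSG_REC INTO [lib://Dados/MI/MSG/L1_CM_MSG_REC.qvd] (qvd);

// Layer 2: build the transformed QVD from the raw data.
MSG_ALR:
LOAD DISTINCT
    ID_MSG_REC,
    DT_REC,
    MODULE_ID as ID_MODULE,
    SubField(MSG, ',', 5) AS OCORRENCIA
RESIDENT CM_MSG_REC
WHERE SubField(MSG, ',', 5) = 'BEGIN';

DROP TABLE CM_MSG_REC;

STORE MSG_ALR INTO [lib://Dados/MI/MSG/L2_CM_MSG_REC_ALR.qvd] (qvd);
```

Keeping the raw extract and the transform in separate QVDs means only the L1 layer has to touch the source system; the L2 layer can be rebuilt from disk.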

Regards

eduardo_dimperio
Specialist II
Author

Hi Omar,

thank you for the explanation, but why are the results the same in both cases?

eduardo_dimperio
Specialist II
Author

Hi Jaha,

It works, and gives the same result as 1 and 2.

Since 1 and 2 give the same result, I want to know which is best.

eduardo_dimperio
Specialist II
Author

Ah, another important point about Exists(): it only works if I have not already loaded those IDs into memory beforehand, which unfortunately is not my case.
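When the key field is already loaded elsewhere in the data model, a common workaround is the two-argument form of Exists(), comparing against a dedicated field name that nothing else has loaded. A sketch under that assumption; KEY_SEEN and the source are illustrative names:

```qlik
// Workaround sketch (hypothetical names): load the known keys under a
// field name used nowhere else, so earlier loads of ID_MSG_REC
// cannot pollute the Exists() check.
ExistingKeys:
LOAD ID_MSG_REC AS KEY_SEEN
FROM [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);

NewRows:
LOAD *
FROM source
WHERE NOT Exists(KEY_SEEN, ID_MSG_REC);  // is this row's ID among the KEY_SEEN values?

DROP TABLE ExistingKeys;                 // drop the helper table when done
```

Here Exists(KEY_SEEN, ID_MSG_REC) tests each incoming ID_MSG_REC value against the previously loaded KEY_SEEN values only.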

zebhashmi
Specialist

Hi - yes, the results are the same, but if the data is big, an optimized QVD load will perform better when loading the data.
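For context: as I understand it, a QVD load stays "optimized" (read straight from disk without unpacking rows) only when there is essentially no transformation; roughly, plain field loads with at most renames, and no expressions in the field list or WHERE clause. A sketch, reusing the thread's field names:

```qlik
// Optimized: plain field list (renames are allowed), no expressions,
// no WHERE clause - the script log should report it as optimized.
MSG_ALR:
LOAD ID_MSG_REC,
     DT_REC,
     ID_MODULE,
     OCORRENCIA
FROM [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);

// Not optimized: the expression forces Qlik to unpack every row.
// MSG_ALR:
// LOAD *, Upper(OCORRENCIA) AS OCORRENCIA_UP
// FROM [lib://Dados/MI/MSG/CM_MSG_REC_ALR.qvd] (qvd);
```

This is why option 2 style scripts usually keep the QVD reload as a bare LOAD * and do all transformation on the resident/source side.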

Thank you!