micheledenardi
Specialist II

Store procedures with very different storing time

Hi experts,

Using Qlik Sense June 2018, today I'm facing a problem with the Store function.

I noticed a very large time difference between these two store approaches:

Using this script, my 120 QVDs (~16 GB) are stored in 50 minutes:

let numTables = NoOfTables();

for i=0 to $(numTables)-1
    let tt = TableName(i);
    Store $(tt) into $(vStoreQvdPath)$(tt).qvd (qvd);
next

let numTables = NoOfTables();

for i=1 to $(numTables)
    let tt = TableName(0);
    drop table [$(tt)];
next

Using the "classic" approach (with the same data), my 120 QVDs (~16 GB) are stored in only 7 minutes:

Store tab1 into [lib://BaseFolder/]tab1.qvd (qvd);
Drop table tab1;
Store tab2 into [lib://BaseFolder/]tab2.qvd (qvd);
Drop table tab2;
Store tab3 into [lib://BaseFolder/]tab3.qvd (qvd);
Drop table tab3;
.........

I would prefer to use the first approach, but this difference in storing time is so large that it's pushing me toward the second one.

Can anyone explain to me why this difference exists?

Is there a faster way to store QVDs without explicitly listing every table name?

Michele De Nardi
If a post helps to resolve your issue, please accept it as a Solution.
8 Replies
joseph_eftamand
Partner - Creator

The difference must be that in v2 you drop after you store. In v1 you store all THEN drop all.

Try dropping after storing all in the same loop.
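Something like this, for example (an untested sketch reusing Michele's $(vStoreQvdPath) variable; the loop runs backwards so that dropping a table doesn't shift the indexes that TableName() returns for the tables not yet stored):

```
let numTables = NoOfTables();

for i = $(numTables)-1 to 0 step -1
    let tt = TableName(i);
    // store, then immediately free the table's RAM before the next iteration
    Store [$(tt)] into $(vStoreQvdPath)$(tt).qvd (qvd);
    drop table [$(tt)];
next
```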

micheledenardi
Specialist II
Author

I've already tried it that way, but nothing changed.

It seems that the for loop itself is slowing down the store.

marcus_sommer

I don't think the loop causes this delay. Of course a loop adds some overhead, but usually it's just a few milliseconds per iteration, which only becomes significant with a very high number of iterations; with just 2 * 120 iterations (store & drop) it should hardly be noticeable. Even if you assumed 1 second per iteration, that would be just 4 minutes of delay, not the 43 minutes in your description.

This means the reason must be elsewhere. It could be that there is simply not enough RAM available, in which case dropping the tables within the store loop might help. Further, make sure that $(vStoreQvdPath) points to the same folder as lib://BaseFolder. Also check that no other activity on your server/machine and storage might affect the comparison.

- Marcus

micheledenardi
Specialist II
Specialist II
Author

Ciao Marcus,

RAM consumption is always under 20%, so I don't think the problem lies there.

$(vStoreQvdPath) points to the same folder as lib://BaseFolder.

I'm sure there is no other activity that could affect server performance (we have a clustered setup, and we run these tests only on the reload node, which has no other tasks running during the tests).


maxgro
MVP

Try this, with a single loop:

let numTables = NoOfTables();

for i=$(numTables)-1 to 0 step -1
    let tt = TableName(i);
    trace i=$(i) table $(tt);
    Store $(tt) into $(vStoreQvdPath)$(tt).qvd (qvd);
    trace ***** store $(tt);
    drop table [$(tt)];
    trace ***** drop $(tt);
next

marcus_sommer

In addition to the suggestion from Massimo, you could add a time measurement before starting the loop, and before and after the store and drop statements, to see if anything stands out.
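A rough sketch of that measurement (untested; it reuses Michele's $(vStoreQvdPath) variable and formats the elapsed times with Interval()):

```
let numTables = NoOfTables();
let vLoopStart = Now();

for i = $(numTables)-1 to 0 step -1
    let tt = TableName(i);

    let t0 = Now();
    Store [$(tt)] into $(vStoreQvdPath)$(tt).qvd (qvd);
    let t1 = Now();
    drop table [$(tt)];
    let t2 = Now();

    // per-table timings: how long the store and the drop each took
    trace $(tt) store $(=Interval(t1 - t0, 'hh:mm:ss')) drop $(=Interval(t2 - t1, 'hh:mm:ss'));
next

trace total $(=Interval(Now() - vLoopStart, 'hh:mm:ss'));
```

If one particular table or one particular statement accounts for most of the 50 minutes, that should show up directly in the trace output.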

While writing this I remembered another issue with creating QVDs within a loop (see: Re: Increasing QVD sizes when loading and storing in a long loop), but with just 120 iterations it seems rather unlikely to me that this caused such a delay.

- Marcus

micheledenardi
Specialist II
Author

Ciao Massimo,

your loop is faster than my initial loop (it takes 20 minutes to store my QVDs), but it is not as fast as specifying the store and drop for each table.

Still, 20 minutes is a great result compared to my 50!

maxgro
MVP

Ciao

I tried with 3 tables, 2 fields, 100M rows per table, ~2.4 GB of QVD per table.

The script is below; the result (store-and-drop vs for loop) is:

t1 (row-based qvd optimized) 100.000.000 Lines fetched
t2 (row-based qvd optimized) 100.000.000 Lines fetched
t3 (row-based qvd optimized) 100.000.000 Lines fetched
***** START 28/09/2018 18:11:32
***** END 28/09/2018 18:12:36

t1 (row-based qvd optimized) 100.000.000 Lines fetched
t2 (row-based qvd optimized) 100.000.000 Lines fetched
t3 (row-based qvd optimized) 100.000.000 Lines fetched
***** START 28/09/2018 18:14:47
***** END 28/09/2018 18:15:52

No difference in elapsed time (1:04 vs 1:05); the second run is the for loop.

//// test data
//
//let n = now();
//trace $(n);
//
//t1:
//noconcatenate load
//    rand()*100,
//    rand()*1000
//AutoGenerate 100000000;
//
//store t1 into t1.qvd (qvd);
//rename table t1 to t2;
//store t2 into t2.qvd (qvd);
//rename table t2 to t3;
//store t3 into t3.qvd (qvd);
//drop table t3;
//
//let n = now();
//trace $(n);

// test 1: store and drop
noconcatenate load * from t1.qvd (qvd);
noconcatenate load * from t2.qvd (qvd);
noconcatenate load * from t3.qvd (qvd);

let n = now();
trace ***** START $(n);

store t1 into t1.qvd (qvd);
drop table t1;
store t2 into t2.qvd (qvd);
drop table t2;
store t3 into t3.qvd (qvd);
drop table t3;

let n = now();
trace ***** END $(n);

// test 2: for loop
noconcatenate load * from t1.qvd (qvd);
noconcatenate load * from t2.qvd (qvd);
noconcatenate load * from t3.qvd (qvd);

let n = now();
trace ***** START $(n);

let numTables = 3;

for i=$(numTables)-1 to 0 step -1
    let tt = TableName(i);
//    trace i=$(i) table $(tt);
    Store $(tt) into $(tt).qvd (qvd);
//    trace ***** store $(tt);
    drop table [$(tt)];
//    trace ***** drop $(tt);
next

let n = now();
trace ***** END $(n);