datanibbler
Champion

Make a Mapping-LOAD faster?

Hi,

I have a mapping LOAD that concatenates two fields on each side of the mapping and filters with a WHERE clause combining two MATCH() conditions with AND.

So I cannot make it optimized, because the load has to process the data.

I have tried speeding it up with two EXISTS() clauses instead, but that does not work.

So is there any way to speed this up at all?

The code looks like this:

MAP_Vorgänger:
mapping load
    VBELN & $(vSEP) & POSNN,
    VBELV & $(vSEP) & POSNV
from [filepath to qvd] (qvd)
where match(VBTYP_V, 'C', 'G', 'K', 'L')
  and match(VBTYP_N, '5', '6', 'M', 'N', 'S', 'U');

I will try with some IF and OR constructs - that might help, who knows. Does anyone have an idea how to approach this?

Thanks a lot!

Best regards,

DataNibbler

1 Solution

Accepted Solutions
olivierrobin
Specialist III

hello

if you want your load to be optimized, use only ONE exists()

in your case, I would try one load with exists() on the field that filters out the most rows

then the mapping load from that table (with resident)

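A sketch of that two-step approach (the field and variable names come from the question; which field is the more selective one is an assumption - swap the roles if VBTYP_N filters harder):

```
// Load the wanted VBTYP_V values first so exists() can check against them.
Wanted:
load * inline [
VBTYP_V
C
G
K
L
];

// Pass 1: a single-parameter exists() keeps this QVD load optimized.
Tmp:
load VBELN, POSNN, VBELV, POSNV, VBTYP_V, VBTYP_N
from [filepath to qvd] (qvd)
where exists(VBTYP_V);

// Pass 2: apply the second filter and build the keys in a resident load.
MAP_Vorgänger:
mapping load
    VBELN & $(vSEP) & POSNN,
    VBELV & $(vSEP) & POSNV
resident Tmp
where match(VBTYP_N, '5', '6', 'M', 'N', 'S', 'U');

drop tables Wanted, Tmp;
```

The resident pass is not optimized, but it only touches the rows that survived pass 1, which is usually where most of the time is saved.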

7 Replies

marcus_sommer

You could use something like this:

t0:
load * inline [
VBTYP_V, VBTYP_N
C, 5
G, 6
K, M
L, N
, S
, U
];

t1:
load VBELN, POSNN, VBELV, POSNV
from [filepath to qvd] (qvd)
where exists(VBTYP_V);

concatenate
load VBELN, POSNN, VBELV, POSNV
from [filepath to qvd] (qvd)
where exists(VBTYP_N);

MAP_Vorgänger:
mapping load
    VBELN & $(vSEP) & POSNN,
    VBELV & $(vSEP) & POSNV
resident t1;

drop tables t0, t1;

If this isn't fast enough, you would need to move some steps, like the filtering and/or the string concatenation, into earlier load steps and/or apply an incremental approach to them.
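Moved into the transformer app, that could look roughly like this (a sketch; the key-field names and the second QVD file name are assumptions):

```
// In transformation.qvw: filter once and build the concatenated keys,
// then store the result as its own QVD.
Pre:
load
    VBELN & $(vSEP) & POSNN as %Key_N,
    VBELV & $(vSEP) & POSNV as %Key_V
from [filepath to qvd] (qvd)
where match(VBTYP_V, 'C', 'G', 'K', 'L')
  and match(VBTYP_N, '5', '6', 'M', 'N', 'S', 'U');

store Pre into [filepath to mapping qvd] (qvd);
drop table Pre;

// In datamodel.qvw: a plain two-field load from the QVD with no
// expressions and no WHERE clause, so there is nothing left to process.
MAP_Vorgänger:
mapping load %Key_N, %Key_V
from [filepath to mapping qvd] (qvd);
```

This trades extra STORE time in the transformer app for a much cheaper load in the data model, so it only pays off if the mapping load is currently the bottleneck.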

- Marcus

datanibbler
Champion
Author

Hi,

yep, I will try. When I have the time, I will try out different possibilities and see if I can make that faster.

Thanks a lot!

Best regards,

DataNibbler

datanibbler
Champion
Author

@Marcus

I am also thinking about possible savings from performing some transformations in the previous app - the transformation.qvw - and thereby speeding up the load in the following datamodel.qvw. But since it is all run just once a day in the early morning as of now, there doesn't seem to be much potential in that: storing a large table takes so long here that doing some transformation in the transformation.qvw, storing a "special version" of a table and then just loading this optimized in the datamodel.qvw comes out to roughly +-0. If we had a faster server, there might be potential there ...

Best regards,

DataNibbler

marcus_sommer

It sounds like a network storage is used for the data, which is quite inappropriate for handling larger datasets. What speaks against using the server's own storage and/or updating/extending this storage with an SSD?

- Marcus

datanibbler
Champion
Author

Hmmm ... there is no network storage; as far as I know we are using the internal storage of the server. Still, storing a large table takes quite a while - one example is a table with about 47 million rows which takes approx. 3 minutes to store. I have no comparison, but that does seem a lot. Is that long, or is it what I should expect for such a big table?

When I store to my local desktop instead, to circumvent the network, it takes even longer, so that can't be it ...

Thanks a lot!

Best regards,

DataNibbler

marcus_sommer

It's somewhat related, so I'll add the link here: Re: Detail question on STORE

I suggest you take a look at the Task Manager (Performance tab - or even the Resource Monitor) during the store, because it will show how fast your network is and how busy the CPU is. The bottleneck isn't necessarily the network/storage, especially with very fast PCIe SSDs.

- Marcus