Hi,
I have a MAPPING LOAD that builds two concatenated key expressions (from two fields each) and has a WHERE clause combining two Match() conditions with AND.
Because of that, the load cannot run optimized - the data has to be processed during the load.
I have tried speeding it up with two EXISTS() clauses instead, but that does not work either.
Is there any way to speed this up at all?
The code looks like this:
MAP_Vorgänger:
MAPPING LOAD
    VBELN & $(vSEP) & POSNN,
    VBELV & $(vSEP) & POSNV
FROM [filepath to qvd] (qvd)
WHERE Match(VBTYP_V, 'C', 'G', 'K', 'L')
  AND Match(VBTYP_N, '5', '6', 'M', 'N', 'S', 'U');
I will try with some IF and OR constructs - that might help, who knows. Maybe someone knows a way to do this?
Thanks a lot!
Best regards,
DataNibbler
Hello,
if you want your load to be optimized, use only ONE Exists().
In your case, I would try one load with Exists() on the condition that filters out the most rows,
then do the mapping load from that table (with RESIDENT).
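For illustration, a rough sketch of that idea, reusing the field and file names from the question (treating VBTYP_V as the more selective condition is only an assumption - use whichever Match() removes the most rows):

// 1) Load the allowed VBTYP_V values so EXISTS() can test against them
VBTYP_V_Filter:
LOAD * INLINE [
VBTYP_V
C
G
K
L
];

// 2) QVD load restricted by a single WHERE EXISTS() on one field,
//    so it can stay optimized
TMP_Vorgänger:
LOAD VBELN, POSNN, VBELV, POSNV, VBTYP_V, VBTYP_N
FROM [filepath to qvd] (qvd)
WHERE EXISTS(VBTYP_V);

// 3) Apply the second condition and the string concatenation in a resident load
MAP_Vorgänger:
MAPPING LOAD
    VBELN & $(vSEP) & POSNN,
    VBELV & $(vSEP) & POSNV
RESIDENT TMP_Vorgänger
WHERE Match(VBTYP_N, '5', '6', 'M', 'N', 'S', 'U');

DROP TABLES VBTYP_V_Filter, TMP_Vorgänger;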
You could use something like this:
// Load the allowed values of both fields into one inline table
t0:
LOAD * INLINE [
VBTYP_V, VBTYP_N
C,5
G,6
K,M
L,N
,S
,U
];

// Two QVD loads, each restricted by a single EXISTS() so they can stay optimized
t1:
LOAD VBELN, POSNN, VBELV, POSNV
FROM [filepath to qvd] (qvd) WHERE EXISTS(VBTYP_V);
CONCATENATE
LOAD VBELN, POSNN, VBELV, POSNV
FROM [filepath to qvd] (qvd) WHERE EXISTS(VBTYP_N);

// Build the mapping table from the already-loaded data
MAP_Vorgänger:
MAPPING LOAD
    VBELN & $(vSEP) & POSNN,
    VBELV & $(vSEP) & POSNV
RESIDENT t1;

DROP TABLES t0, t1;
If this isn't fast enough, you would need to move some steps, like the filtering and/or the string concatenation, into earlier load steps, and/or apply an incremental approach to them.
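For example (a hedged sketch - the key names %KeyNachfolger and %KeyVorgänger and the target file are made up for illustration), the Match() conditions and the string concatenation could already be done in the transformation step, so that the later mapping load in the data model has nothing left to evaluate:

// In transformation.qvw: evaluate the conditions and build the keys once
Vorgänger_Prep:
LOAD
    VBELN & $(vSEP) & POSNN AS %KeyNachfolger,
    VBELV & $(vSEP) & POSNV AS %KeyVorgänger
FROM [filepath to qvd] (qvd)
WHERE Match(VBTYP_V, 'C', 'G', 'K', 'L')
  AND Match(VBTYP_N, '5', '6', 'M', 'N', 'S', 'U');

STORE Vorgänger_Prep INTO [filepath to mapping qvd] (qvd);
DROP TABLE Vorgänger_Prep;

// In datamodel.qvw: a plain mapping load from the prepared QVD,
// with no expressions and no where-clause any more
MAP_Vorgänger:
MAPPING LOAD %KeyNachfolger, %KeyVorgänger
FROM [filepath to mapping qvd] (qvd);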
- Marcus
Hi,
yep, I will try. When I have the time, I will test the different possibilities and see whether I can make it faster.
Thanks a lot!
Best regards,
DataNibbler
@ Marcus
I am also thinking about possible savings by performing some transformations in the previous app - the transformation.qvw - and thereby speeding up the load in the following datamodel.qvw. But since everything currently runs just once a day in the early morning, there doesn't seem to be much potential in that: storing a large table takes so long here that doing the transformation in transformation.qvw, storing a "special version" of the table and then loading that optimized in datamodel.qvw comes out to roughly zero net gain. With a faster server there might be potential there ...
Best regards,
DataNibbler
It sounds as if network storage is being used for the data, which is quite inappropriate for handling larger datasets - what speaks against using the server's own storage and/or upgrading/extending that storage with an SSD?
- Marcus
Hmmm ... there is no network storage; we are using the internal storage of the server, AFAIK. Still, storing a large table takes quite a while - one example is a table with about 47 million rows which takes approx. 3 minutes to store. I have no comparison, but that does seem like a lot. I don't know - is that long, or is it what I should expect for such a big table?
When I store to my local desktop instead, to circumvent the network, it takes even longer, so that can't be it ...
Thanks a lot!
Best regards,
DataNibbler
It's a bit related to this, so I'll add the link here: Re: Detail question on STORE
I suggest you take a look at the Task Manager (Performance tab - or even the Resource Monitor) during the store, because it will show how fast your network is and also how busy the CPU is - the bottleneck isn't necessarily the network/storage, especially with very fast PCIe SSDs.
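In addition, a small sketch of how the store duration could be logged from the script itself, to compare the different approaches (the table name BigTable is just a placeholder):

LET vStoreStart = Num(Now());
STORE BigTable INTO [filepath to qvd] (qvd);
LET vStoreSeconds = Round((Num(Now()) - vStoreStart) * 86400);
TRACE STORE took $(vStoreSeconds) seconds;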
- Marcus