Hello,
I'm seeing strange behaviour in an old QlikView document.
The document loads the order details with 5 SQL SELECTs and concatenates the results.
Each query takes a long time (about 30 minutes).
It then does a LEFT JOIN with another SELECT (very fast).
The next step stores the table to a QVD file (50 MB) and drops the resident table.
Between the LEFT JOIN and the STORE the RAM usage grows:
- on a server with QlikView Desktop version 8.5 it uses about 24 GB of 60 GB
- on the new server with QlikView Desktop version 11 it uses about 36 GB of the 32 GB available (so it is paging)
After the DROP the RAM is not released and the following operations are very slow.
So:
1) Why does it take more memory on the new server than on the old one?
2) I suppose it is doing a cross join, otherwise I don't understand how 50 MB becomes 20 GB (or 36 GB) - correct?
3) Is it possible to release the RAM during the reload, and how?
Thanks
PS: the script schema is roughly:
tb1:
SQL SELECT ...;               // select 1
tb2:
SQL SELECT ...;               // select 2
Concatenate SQL SELECT ...;   // select 3
Concatenate SQL SELECT ...;   // select 4
Concatenate SQL SELECT ...;   // select 5
Concatenate LOAD * Resident tb1;
DROP TABLE tb1;
LEFT JOIN SQL SELECT ...;     // select 6
STORE tb2 INTO <file>.qvd;
DROP TABLE tb2;
....other
Now I'm trying to split the document into two files...
Hey,
For debugging purposes you could try saving these SQL statements into QVD files and then loading the QVD files into your QVW. This is much faster than loading them directly from the database every time.
I usually create two QVW files: one for loading the data from the database and saving it into QVD files, and a second QVW for the actual dashboard, where you load the QVD files and create the data model.
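As a rough sketch of that split (connection, table, and file names here are placeholders, not from the original post):

// Extract.qvw - pull from the database once and persist to QVD
// ODBC CONNECT TO <data source>;
OrderDetails:
SQL SELECT * FROM OrderDetails;
STORE OrderDetails INTO OrderDetails.qvd (qvd);
DROP TABLE OrderDetails;

// Dashboard.qvw - build the data model from the QVD instead
OrderDetails:
LOAD * FROM OrderDetails.qvd (qvd);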
Hope this helps.
gr.
Frank
Hi,
Try using LEFT KEEP - that is, replace the LEFT JOIN with a LEFT KEEP.
Note: sometimes a LEFT JOIN on a non-unique key multiplies the rows enormously (effectively a cross join). To resolve this, try building a composite key in the table.
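For illustration, a sketch of the difference (table and field names invented):

// LEFT JOIN merges the second result into tb2; if the key is not
// unique on both sides the row count can explode:
LEFT JOIN (tb2) SQL SELECT OrderID, Amount FROM Payments;

// LEFT KEEP leaves the second table separate, only reducing it to
// the OrderID values present in tb2:
Payments:
LEFT KEEP (tb2) SQL SELECT OrderID, Amount FROM Payments;

// A composite key can make the join key unique, e.g.:
// LOAD OrderID & '|' & LineNo AS %OrderLineKey, ...;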
Hi,
I can't answer the original question - the internals of QlikView's memory usage can be quite unfathomable at times.
What I would suggest, though, is breaking this load up into chunks - persisting data to QVD and then bringing the QVDs together at the end. If performance is an issue you could put each SQL statement into its own QVD generator and run these one after the other. Once you have all of the raw QVDs built, you can build another routine that does the joins. How you join them together can also have a big bearing - for example, concatenates can often be quicker than joins, and optimised loads are always many times quicker than non-optimised ones.
You will find plenty of information on building a solid QVD strategy online. I have a number of blog posts on the subject on my own site that you may find useful.
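For example, a generator per statement might look like this (table and file names are placeholders):

// Generator for the first chunk - repeat for each SQL statement:
Part1:
SQL SELECT * FROM OrderDetails_Part1;
STORE Part1 INTO Part1.qvd (qvd);
DROP TABLE Part1;

// Later routine - a plain LOAD * from QVD with no transformations
// stays optimised, and Concatenate avoids a join altogether:
Orders:
LOAD * FROM Part1.qvd (qvd);
Concatenate (Orders) LOAD * FROM Part2.qvd (qvd);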
Hope that helps,
Steve
Thanks all, I'll split the concatenated SQL SELECTs into separate SQL SELECT -> QVD steps and then concatenate the QVDs as you said.
But doesn't dropping the resident table release the memory?
By having separate QVD generators and loading from QVD rather than from a resident table, you should be able to build it so that there is no large resident table to drop. As each QVW completes, its memory is freed. If your final presentation application just does optimised loads from QVD then all will be well. To achieve this you may need to add an extra step to the QVD generation routines that combines the first-tier QVDs into a second-tier QVD before finally loading it into the presentation layer.
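A sketch of that second tier (file names invented):

// Second-tier generator - combine the first-tier QVDs and store
// the result, so the presentation app does a single optimised load:
Combined:
LOAD * FROM Part*.qvd (qvd);  // wildcard picks up Part1.qvd, Part2.qvd, ...
STORE Combined INTO Combined.qvd (qvd);
DROP TABLE Combined;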
-Steve