For debugging purposes you could try saving these SQL statements into QVD files and then loading the QVD files into your QVW. This is much faster than loading the data directly from the database every time.
I usually create two QVW files: one for loading the data from the database and saving it into QVD files, and a second QVW for the actual dashboard, where you load the QVD files and build the data model.
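A minimal sketch of what the generator script might look like (the connection, table and file names here are just placeholders for illustration):

    // QVD generator script - extract once, store to disk
    ODBC CONNECT TO MyDatabase;  // hypothetical DSN

    Orders:
    SQL SELECT OrderID, CustomerID, OrderDate, Amount
    FROM dbo.Orders;

    STORE Orders INTO [..\QVD\Orders.qvd] (qvd);
    DROP TABLE Orders;  // free the memory once the QVD is written

The dashboard QVW then simply does:

    Orders:
    LOAD * FROM [..\QVD\Orders.qvd] (qvd);  // reads from disk, no database hit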
Hope this helps.
I can't answer the original question - the internals of QlikView's memory usage can be quite unfathomable at times.
What I would suggest though is breaking this load up into chunks - persisting data to QVD and then bringing the QVDs together at the end. If performance is an issue then you could put each SQL statement into its own QVD generator and run these one after the other. Once you have all of the raw QVDs built you can build another routine that does the joins. How you join them together can also have a big bearing - for example concatenates can often be quicker than joins, and optimised QVD loads are many times quicker than non-optimised ones.
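For instance, concatenating two raw QVDs with the same structure should stay fully optimised, as long as you don't transform any fields in the LOAD (the file names here are just examples):

    Facts:
    LOAD * FROM [..\QVD\Sales2022.qvd] (qvd);  // optimised load

    CONCATENATE (Facts)
    LOAD * FROM [..\QVD\Sales2023.qvd] (qvd);  // also stays optimised

The moment you add a calculated field or a WHERE clause (other than a simple WHERE EXISTS) the load drops out of optimised mode, so keep transformations in the generator tier where possible.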
You will find plenty of information online on building a solid QVD strategy. I have a number of blog posts on the subject on my own site that you may find useful.
Hope that helps,
By having separate QVD generators and loading from QVD rather than from a resident table, you should be able to build it so that there is never a large resident table to drop. As each QVW completes, its memory is freed. If your final presentation application just does optimised loads from QVD then all will be well. To achieve this you may need to add an extra step to the QVD generation routines to combine the first-tier QVDs into a second-tier QVD before finally loading that into the presentation layer.
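A sketch of that second-tier step might look something like this (all table and file names are made up for the example; the join itself won't be an optimised load, but that's fine because it happens in the generator, not the dashboard):

    // Second-tier generator: combine first-tier QVDs
    Combined:
    LOAD * FROM [..\QVD\Tier1_Orders.qvd] (qvd);

    LEFT JOIN (Combined)
    LOAD CustomerID,
         CustomerName
    FROM [..\QVD\Tier1_Customers.qvd] (qvd);

    STORE Combined INTO [..\QVD\Tier2_Facts.qvd] (qvd);
    DROP TABLE Combined;  // memory is released before the dashboard ever runs

The presentation QVW then only ever performs a single optimised load from Tier2_Facts.qvd.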