Hi, I am currently using two separate loads to create my final QVDs. My system is like this:
LOAD 1
load database 1 where x applies, save to temp1.qvd
load database 1 where y applies, save to temp2.qvd
load database 1 where z applies, save to temp3.qvd
LOAD 2
Left join temp1, temp2, temp3; save to data.qvd
My question is this: at the moment I have two separate files, one for LOAD 1 and another for LOAD 2 that runs once LOAD 1 has completed. Is there an easier way to set this up, so that I have, say, one file that does both parts?
Many thanks
Ben
Hi Ben,
From a quick look at your scripts, I would say that the simple answer to your question is to issue a DROP statement after each of the STORE statements. This will save memory and stop QlikView wasting effort building synthetic keys it doesn't need.
Budget:
CROSSTABLE (Period, [Budget], 7)
LOAD
GBCO as [Company],
GBAID as [Account ID],
[...]
GBAN12/100 as [12],
GBAN13/100 as [13];
SQL SELECT *
FROM TRDTA.F0902 WHERE "GBLT" = 'BA' AND "GBFY" = '9';
Store Budget Into Data/Temp/1.qvd;
DROP TABLE Budget;
You would then be able to do the loads from QVD with joins (as you have it in the second file) in the same script as the first without any problems.
However, I would recommend you keep your QVD building in a separate application from your analysis anyway. This is best practice, and it allows you to do multiple reloads of your front-end document without going back to the source data. Also, I tend to build many QVD generation applications, so that they can be run independently. If you run QV Server 9 SR1 or above, you can schedule one to run after the other anyway.
Furthermore, I would try doing the loads from QVDs into a single table with a CONCATENATE statement rather than a JOIN statement. I would imagine that this will run much quicker. It will give you many more rows in the resultant table, with a lot of the values being NULL, but these null values will be ignored when you do analysis.
Another thing you may want to consider is putting the concatenation in at the QVD generation phase. The reason I suggest this is that this part of the load will be relatively slow anyway, so the concatenation will not be a big hit. When loading from QVD, if you do not do any manipulation on the data it will load many times faster (search for 'QVD optimized' for details). If you need to pull only certain types of data from the QVD, stamp rows of each type with an identifier, like this:
MainTable:
CROSSTABLE (Period, [Budget], 8)
LOAD
'Budget' as RowType,
GBCO as [Company],
GBAID as [Account ID],
[...]
CONCATENATE
CROSSTABLE (Period, [Actual Spend], 8)
LOAD
'Actual' as RowType,
[...]
STORE MainTable INTO....
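As an aside, when you later read from that combined QVD you can filter on RowType without losing the optimized load by using WHERE Exists() rather than an ordinary WHERE clause. A rough, untested sketch (table and file names are just examples):

RowTypes:
LOAD * INLINE [
RowType
Budget
];

Data:
LOAD *
FROM Data/MainTable.qvd (qvd)
WHERE Exists(RowType);

DROP TABLE RowTypes;

An ordinary WHERE clause forces an unoptimized load, but Exists() on a single field keeps the load optimized.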
Sorry, that is probably far more detail than you were looking for, but I hope it is useful to you. Please note I have not tried out any of my code snippets, so they may need a bit of tweaking.
Cheers,
Steve
Hi Ben,
The best approach depends on exactly what you are trying to achieve. The thing that jumps out is that you could just do:
load database 1 where (x or y or z) applies, save to data.qvd
But I presume it is not that straight forward?
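If it were that straightforward, the script would be something like this (untested, with placeholder names for the source table, conditions and output path):

Data:
LOAD *;
SQL SELECT *
FROM Database1
WHERE x_condition OR y_condition OR z_condition;

STORE Data INTO Data/data.qvd;
DROP TABLE Data;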
If the data structure of all three temp QVDs is the same, a concatenated load of all three QVDs into a single table and then a save would be very efficient, as an optimised QVD load could be performed.
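Something like this (untested; file names taken from your pseudocode):

Data:
LOAD * FROM temp1.qvd (qvd);

CONCATENATE (Data)
LOAD * FROM temp2.qvd (qvd);

CONCATENATE (Data)
LOAD * FROM temp3.qvd (qvd);

STORE Data INTO data.qvd;
DROP TABLE Data;

Because no fields are manipulated on the way through, all three loads should remain QVD optimised.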
If you can attach the actual load scripts rather than pseudocode then it will be a bit easier to suggest a solution.
Cheers,
Steve
Hi Steve, apologies, I think the two load scripts will make it clearer; it is somewhat complicated. The reason that the files have to be saved as separate QVDs is that I want to be able to treat each of the selected types as a separate column in the final QVW. Hope this helps!
Thank you for the help!
Cheers
Ben
Here is the 2nd part of the load
Hi Steve,
Thanks very much for that, the DROP TABLE command has helped no end. However, this brings me on to another issue: now that I can create the QVD files, I am finding that when I load them to do analysis the load is timing out, and I also end up with 3 synthetic tables. Have you got any further advice on how to reduce the number of synthetic tables or improve the reading of the QVDs for analysis? At the moment I sometimes run out of RAM; I've got 4GB in this PC and have no access to any more. Any idea how to resolve this? Sorry to be plaguing you with questions! All help is much appreciated. Please see the new improved load QVD and reader QVD attached.
Cheers
Ben
Here is that new improved create-QVD file. Are these the changes you were suggesting?
Cheers
Ben
Hi Ben,
Glad to hear that the drops helped. Having the synthetic keys in the analysis document will certainly cause issues. To avoid these you need to ensure that there is only one common field between any two tables. One way of achieving this is to build composite keys to do the joins (e.g. Period & ':' & AccountID as PeriodAccountID).
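As a rough, untested sketch (field and file names are just examples), you would build the same composite key on both sides and keep only one copy of the shared fields:

Budget:
LOAD
    Period & ':' & [Account ID] as PeriodAccountID,
    Period,
    [Account ID],
    [Budget]
FROM Data/Temp/1.qvd (qvd);

Actuals:
LOAD
    Period & ':' & [Account ID] as PeriodAccountID,
    [Actual Spend]
FROM Data/Temp/2.qvd (qvd);

Now PeriodAccountID is the only field the two tables share, so no synthetic key is built.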
Another thing that I have spotted is that where you are attempting to concatenate the Budget Description onto your main table, it will not get attached to the rows with the data values in them. In order to attach those you will probably need to build a composite key with all four fields (Company, Account ID, Cost Centre and Object Account). I would advise looking at the MAPPING LOAD and ApplyMap statements to attach the description to the main table.
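A rough, untested sketch of the mapping approach (field and file names are placeholders; your real key may need to combine the four fields mentioned above):

DescriptionMap:
MAPPING LOAD
    [Account ID],
    [Budget Description]
FROM Data/Temp/Descriptions.qvd (qvd);

MainTable:
LOAD
    *,
    ApplyMap('DescriptionMap', [Account ID], 'Missing') as [Budget Description]
FROM Data/Temp/1.qvd (qvd);

The third argument to ApplyMap is the default value used when no match is found in the mapping table.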
Good luck.
Steve