I'm not sure if it's possible and practical to use a combination of binary loads to get a stable incremental load process. I assume rather not, and I haven't yet seen or heard of any examples of it, which suggests it would be quite uncommon.
The common way to create an incremental load process is to use QVDs. If they are loaded optimized they are nearly as fast as a binary load (especially if the binary load is only used for one table), because it's almost the same thing: the data is transferred directly into RAM, and the small overhead of processing the header respectively the meta-data remains quite small even with a 15 GB file. The important point is to load the QVDs OPTIMIZED (at least the bigger ones).
Here you will find various examples of incremental loading and of the Exists() function, which is often needed to keep the load optimized: Advanced topics for creating a qlik datamodel
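As a sketch of the pattern Marcus describes (all table, field and file names here are made up for illustration), a typical incremental QVD load looks roughly like this:

```
// Hypothetical incremental QVD load. Assumes a delta source holding only
// new/changed rows, plus an existing Transactions.qvd with the history.
Transactions:
LOAD TransactionID, TransactionDate, Amount
FROM [Source\Delta.csv] (txt, codepage is 1252, embedded labels, delimiter is ',');

// Append the history, skipping keys already loaded from the delta.
// A Where clause consisting of a single Exists() on one field - and
// nothing else - keeps the QVD load OPTIMIZED, which is the point
// Marcus stresses.
Concatenate (Transactions)
LOAD TransactionID, TransactionDate, Amount
FROM [Transactions.qvd] (qvd)
WHERE NOT EXISTS(TransactionID);

STORE Transactions INTO [Transactions.qvd] (qvd);
```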
Thanks Marcus for your reply.
I have actually gone and set up what I described, and it's been running stably and correctly so far... My hopes were that someone may have tried something similar and could comment on their own experiences... I am a bit swamped with other projects at the moment, but when I'm able I will pit the QVD incremental load and this one against one another and see how they fare, time and resource wise... So I can confirm that it works, just not how well in comparison to the standard incremental load.
My guess though would be that even though the reload time for the binaries will be a bit faster, it would still have to open, reload and save 2 qvw's, whereas with the standard QVD incremental load the table can be dropped after the QVD has been stored, so opening and saving the qvw used for the incremental load will be significantly faster. The script finishes running only after the QVD has been successfully stored, correct?
I don't believe that binary loading has an advantage over QVD loading from an incremental point of view for single tables. To refresh a whole application it might be useful to binary load the historical data and concatenate the current data to it, for example in a scenario where an application is refreshed several times a day, but you will need further steps to store and append these data to the historical data, perhaps within a nightly update window.
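A rough sketch of that intraday scenario (the names History.qvw, Facts and Today.csv are illustrative, not from this thread): the app binary loads the consolidated history and concatenates the current day's data on top.

```
// Intraday refresh sketch. Binary must be the very first statement
// in the script; it pulls the historical data model into RAM.
Binary [History.qvw];

// Append today's rows to the table that arrived via the binary load
// (assumed here to be called Facts).
Concatenate (Facts)
LOAD * FROM [Source\Today.csv] (txt, codepage is 1252, embedded labels, delimiter is ',');
```

The nightly update window would then be responsible for folding today's rows back into History.qvw, as described above.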
I would indeed need to append the data; that's why I thought of this "binary loop" method as a way to save the data without utilizing QVDs. I hope that in the future there will be a script command to store the qvw, to overwrite the historical data. It would then work the same as a QVD. But for now the standard QVD method seems the only logical option. Thanks again Marcus for your input.
I would recommend the QVD route as Marcus has said, but have you tried binary loading itself? It might not be so friendly on a large file, though.
You could also have a changing binary load command such as this if you have varying loads to make; you just need to store/manipulate the filename in file.txt ...
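One way this could work (a sketch, assuming file.txt contains the complete binary statement and that an include directive on the first line still satisfies the rule that Binary must be the first statement) is to let the include expand into the Binary statement:

```
// First line of the load script. file.txt is assumed to contain the
// full statement to execute, e.g.:
//   Binary [Source.qvw];
// An external process rewrites file.txt to point at the desired file
// before each reload.
$(Must_Include=file.txt);
```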
I've played about with using a self binary load in the past as a way to store & distribute data in a secured manner, which QVDs don't offer. I was only using small datasets, though.
Maybe the best explanation is to make a demo yourself ...
1) Create a new qlikview document with this script ...
Data:
Load RowNo() as RowID AutoGenerate 1;
... and save it as Source.qvw.
2) Reload it and save. Every time this is done it reads the existing data & adds a row to the Data table.
So you are binary loading the historical data, then the script adds the incremental data and when it is finished you then have a new file which is now all historical data.
Whether this is faster than a QVD routine, I don't know, and you would still need a routine to back up your data in case it gets corrupted or you need to roll back to a point in time. It's also a bit messy if the data structure changes.
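Putting steps 1 and 2 together, the full self-binary script presumably looks something like the sketch below (the Binary line has to be left out for the very first reload, while Source.qvw doesn't exist yet):

```
// Self-binary demo: the document reloads its own last saved state,
// then the generated row auto-concatenates onto the Data table
// because it has the identical single field RowID.
Binary [Source.qvw];  // must be the first statement in the script

Data:
Load RowNo() as RowID AutoGenerate 1;
```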
Thank you flipside
This looks like a promising idea to test! However I must admit I fail to understand the AutoGenerate function... I have tried reading all I can about it on the community etc. If I understand you correctly, Autogenerate 1 adds 1 row each time it is executed. What I don't understand is why.
So, say my script currently looks like this.
My files are named such that the last 6 digits are the date the file was created.
For Each vFile in FileList('Source\*.CSV') //Test to check for new files and load if appropriate
  If Right(SubField(vFile, '.', 1), 6) > vLastReload Then
    Concatenate ("Binary Table")
    LOAD * FROM [$(vFile)] (fix, codepage is 1252); //LOAD line reconstructed; the original field list was lost in the post
    Let vLastReload = Right(SubField(vFile, '.', 1), 6); //Sets vLastReload to the last 6 digits of the last loaded file
  End If
Next vFile
Where would I now insert your Load rowno() as RowID autogenerate 1;?
Also is there a script command to save it as Source.qvw?
Looking forward to your reply
The autogenerate part was just to create some data, so you probably won't be needing this for your solution. It will be useful for you to know how it works, though. In my example my .qvw file has a table called Data with one field, RowID. When I binary load the .qvw it first loads this table, then that script adds 1 row to it. (If I had used, for example, Autogenerate 2, it would add 2 rows.) When the file is saved it has one more row than before, and this repeats every time the script is run. I used RowNo() because this populates the field with the next incremented value - again just for demonstration purposes; you could use anything and it would still get added.
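To see AutoGenerate on its own, here is a standalone snippet (no binary load involved) that generates exactly two rows, with RowID values 1 and 2:

```
// Standalone demonstration of AutoGenerate: no source file is read;
// the load expression is simply evaluated N times, here N = 2.
Data:
Load RowNo() as RowID AutoGenerate 2;
```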
Hope this helps.