Hi
I am trying to set a benchmark for when we will create QVD files rather than loading from source for each model. Does anyone have any pointers? For example: if there are more than 500,000 records and they are going to be used in more than one model, then use a QVD. I am trying to standardise how we do things, but I would like to know whether anyone else has suggestions about the record-count threshold.
If you are going to implement QVD loads, I find it best to implement them across all data imports (except inline loads). Using QVD loads consistently helps organize and simplify the load process. Memory usage is another beast; it depends on the number of records and the number of columns.
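As a rough sketch of what a consistent QVD extract layer can look like, the script below loads one source table and stores it as a QVD; the library paths, file format, and field names are placeholders, not from this thread:

```
// Extract layer: load each source table once and store it as a QVD.
// Paths and field names here are illustrative only.
Orders:
LOAD OrderID,
     CustomerID,
     OrderDate,
     Amount
FROM [lib://Source/Orders.xlsx]
(ooxml, embedded labels, table is Orders);

// Write the table to the QVD layer, then drop it so the extract
// script keeps no data in memory.
STORE Orders INTO [lib://QVD/Orders.qvd] (qvd);
DROP TABLE Orders;
```

Each model then loads from [lib://QVD/Orders.qvd] instead of the source, which is where the speed-up comes from.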
Use incremental load. But remember that for an incremental load you need a primary key and a modification date. Beyond that, it depends on what type of increment you want to apply.
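For reference, here is a minimal sketch of the common insert-and-update variant. The table, the fields (OrderID as primary key, ModifiedDate as modification date), and the vLastExecTime variable are assumptions for illustration:

```
// Insert + update incremental load (one common pattern).
// Assumes vLastExecTime holds the timestamp of the last
// successful reload.

// 1. Fetch only rows changed since the last reload.
Orders:
SQL SELECT OrderID, CustomerID, Amount, ModifiedDate
FROM Orders
WHERE ModifiedDate >= '$(vLastExecTime)';

// 2. Append the unchanged history from the QVD. NOT EXISTS skips
//    keys just reloaded, so updated rows are not duplicated.
Concatenate (Orders)
LOAD *
FROM [lib://QVD/Orders.qvd] (qvd)
WHERE NOT EXISTS(OrderID);

// 3. Write the refreshed table back to the QVD.
STORE Orders INTO [lib://QVD/Orders.qvd] (qvd);
```

Handling deletes as well would additionally require an inner join against the current set of source keys.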
Most of the time, I prefer storing the data into QVDs as a first step. Basically this enables me to:
1. Join data from various sources in a single file;
2. Add new calculated fields once and store them in the QVD, so that they are available for use at any time;
3. Simplify the load script of the end-user QVW and thus substantially reduce the reload time of the end-user document.
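The three steps above can be sketched as a transform layer that combines QVDs and precalculates a field once for all end-user documents; the table names, fields, and the OrderSize calculation are placeholders:

```
// Transform layer: join sources and precalculate fields once,
// then store the enriched result for every end-user QVW.
Sales:
LOAD OrderID,
     CustomerID,
     Amount
FROM [lib://QVD/Orders.qvd] (qvd);

// Join a second source into the same table.
LEFT JOIN (Sales)
LOAD CustomerID,
     Region
FROM [lib://QVD/Customers.qvd] (qvd);

// Add a calculated field once; NoConcatenate stops the resident
// load from auto-concatenating back onto Sales.
SalesEnriched:
NoConcatenate
LOAD *,
     If(Amount >= 1000, 'Large', 'Small') AS OrderSize
RESIDENT Sales;
DROP TABLE Sales;

STORE SalesEnriched INTO [lib://QVD/SalesEnriched.qvd] (qvd);
```

End-user documents then do a plain optimized load from SalesEnriched.qvd, which is what keeps their reload times short.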
Hope this helps.
Regards,