I used the standard "Test Script" in QlikView to load 5,000 records.
Now I want to remove one TransID (with all its lines) and add a new version of that TransID, using a partial reload.
// Get list of records that need to be deleted
t2:
ADD
ONLY
LOAD * INLINE [
_TransID, Delete
5000, Y
];
// Get list of all existing TransID Excluding TransID's in t2
t3:
ADD
ONLY
LOAD Distinct
TransID
RESIDENT Transactions
Where Not Exists(_TransID,TransID);
// Remove TransID's which are in t2
Right Keep (Transactions)
t4:
ADD ONLY
LOAD
*
Resident t3;
// Dropping PartialReload Tables
If IsPartialReload() = -1 Then
Trace Partial;
Drop Table t2;
Drop Table t3;
Drop Table t4;
End If;
// Concatenate new version of TransID
Concatenate (Transactions)
Add ONLY
LOAD * INLINE [
TransID ,TransLineID, Dim1 ,Dim2, Dim3, Expression1, Expression2, Expression3, Num
5000 ,1 ,Z ,z ,N ,777, 0, 0.666, 6
];
See also attachment.
My question: does anyone have a faster way to do this? Perhaps there is a faster way to create the t3 table, or to remove the records from the "Transactions" table.
Thanks in advance.
Have you ever used QVD files? Maybe the best solution for you would be to store your transaction table in a QVD file; then you can manipulate that data quickly.
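The QVD approach could look something like this (a minimal sketch; the file name Transactions.qvd is an assumption, and the field names follow the test script above):

```
// Store the transaction table to disk once after a full load ...
STORE Transactions INTO Transactions.qvd (qvd);

// ... then on later reloads read it back. A plain LOAD * from a QVD
// with no transformations and no Where clause is an optimized load.
Transactions:
LOAD * FROM Transactions.qvd (qvd);
```

The speed depends on keeping the QVD load optimized; any transformation in the LOAD drops it back to record-by-record processing.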
That's what we do now, and it's too slow; the table contains millions of records.
Did you try replacing the Right Keep with an Exists?
That Where Exists would need to be on my main Transactions table, which is slow because it contains a lot of columns.
Or perhaps I don't quite understand.
How would you rewrite this script with an Exists?
Right Keep (Transactions)
t4:
ADD ONLY
LOAD
*
Resident t3;
I'm not sure that partial reload is the best approach for fast loading of data, though I have no real experience of my own on the matter: I only played with it a bit some years ago and never used it in real scenarios. If I interpret the various answers on load performance here in the community correctly, it is really rarely used, which makes me think it is, like a preceding load, another more or less convenient way of loading data, but not the fastest one.
Beside this, I think your approach contains a detour: reading all TransID's and using them within a Keep statement as a filter. I would probably use the following (without a partial reload):
temp: load 5000 as TransID autogenerate 1;
table: load * from/resident Source where not exists(TransID);
concatenate(table) load 5000 as TransID, ... other fields autogenerate 1;
drop tables temp;
The important point is to keep all the loads optimized, which is only possible with a Where clause that uses Exists() with a single parameter. As soon as there are two parameters, or any other transformations, the load has to be processed record by record and will be quite slow compared to the optimized way.
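To illustrate the point about Exists() (a sketch; the QVD file names are assumptions, and the field names follow the thread above):

```
// Optimized: single-parameter Exists() and no transformations,
// so the QVD load stays in the fast, optimized mode.
KeepList:
LOAD TransID FROM KeepList.qvd (qvd);

Transactions:
LOAD * FROM Transactions.qvd (qvd)
WHERE Exists(TransID);

// NOT optimized: the two-parameter form, as used in the original
// script, forces record-by-record processing:
// WHERE NOT Exists(_TransID, TransID);
```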
- Marcus