Have you tried loading these txt files in one step? The only thing working against that is the maximum amount of RAM, which might not be enough. Also, do you need all the fields from this file, and are all of them really fact fields, or are there descriptions included alongside some IDs that could be linked in later via dimension tables?
Another thought is to increase the number of rows per file to maybe 5M or 10M; 1M isn't really big. The splitting logic could also be changed from record level to content level, with a where-clause on periods or on categories like products or countries, and/or the data could be split vertically, too.
Also, if you are working inside a network, the loads and stores will speed up if you transfer the files to a local hard-disk drive.
Thanks for the answer, Marcus.
Yes, loading in one step is a little faster, but I need the multi-step approach because the mid-tier changes sometimes; then I process only the mid-tier, which is fast, and I don't need to read the whole txt file again.
Yes, I need all the fields (25); this file was created only for QlikView to read.
Some fields are dimensions, but they are time dimensions, so they cannot be linked in later.
I didn't test it with different numbers of rows. I will do that and share the results within a few days.
I don't know how I could split the files using a dimension, let's say Category, without it taking more time than today. Could you go into details? Would it be something like this?
allrows: LOAD * FROM c:\MyBigFile.txt (...);
// NoConcatenate is needed: each catN table has the same fields as allrows, so without it the resident load would auto-concatenate back into allrows
cat1: NoConcatenate LOAD * Resident allrows WHERE Category=1; STORE cat1 INTO cat1.qvd (qvd); DROP TABLE cat1;
cat2: NoConcatenate LOAD * Resident allrows WHERE Category=2; STORE cat2 INTO cat2.qvd (qvd); DROP TABLE cat2;
cat3: NoConcatenate LOAD * Resident allrows WHERE Category=3; STORE cat3 INTO cat3.qvd (qvd); DROP TABLE cat3;
DROP TABLE allrows;
And all the files used, for load or store, are local.
Yes, I meant something like this: load everything in one step and, in a second loop step, store it in slices, maybe one qvd per year.
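A minimal sketch of that load-once, loop-and-slice idea; the field name OrderDate and the year range are assumptions, so adapt them to your data:

```
Allrows:
LOAD * FROM c:\MyBigFile.txt (txt);    // read the big txt only once

FOR vYear = 2012 TO 2014
  Slice:
  NoConcatenate                        // stop the slice auto-concatenating into Allrows
  LOAD * Resident Allrows
  WHERE Year(OrderDate) = $(vYear);
  STORE Slice INTO [c:\MyBigFile_$(vYear).qvd] (qvd);
  DROP TABLE Slice;
NEXT vYear

DROP TABLE Allrows;
```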
It could also be helpful to think about some transformation steps within these loads. For example, if some of those time dimensions are timestamps, splitting each into a date field and a time field via floor() and frac(), and maybe throwing away the milliseconds, would save a lot of RAM and qvd file size, and could be faster overall. Maybe such logic could be applied to other fields, too:
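As a sketch of that timestamp split (the field name OrderTimestamp is an assumption): floor() keeps the integer day part, frac() keeps the intra-day part, and Round(..., 1/86400) snaps the time to whole seconds, dropping the milliseconds:

```
Allrows:
LOAD *,
     Date(Floor(OrderTimestamp))                as OrderDate,  // day part only
     Time(Round(Frac(OrderTimestamp), 1/86400)) as OrderTime   // time part, rounded to whole seconds
FROM c:\MyBigFile.txt (txt);

DROP FIELD OrderTimestamp;   // the two split fields replace it, saving RAM and qvd size
```

The two new fields have far fewer distinct values than the raw timestamp, which is what shrinks QlikView's symbol tables.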
If you are able to load that large txt file into RAM in one step, the splitting part can be done faster by:
Loading MyBigFile.txt into a QV table Allrows,
performing any transformation/modelling at this point if possible,
storing Allrows into a QVD, then dropping the table Allrows.
Then load the large QVD using a where exists() with your Category filter field to get a smaller subset of the file.
This will be an optimised QVD load, and you'll then be able to store the subset back into a smaller QVD, cat1.qvd, if needed.
The advantage of this over doing a resident load for each of your filter categories is that an optimised QVD load is faster than a resident load, and a resident load also needs extra RAM to hold the copy of the data.
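A sketch of that pattern; note that only the single-argument form of Exists() keeps the QVD load optimised, so the wanted filter values are loaded into a small helper table first (table and file names here are assumptions):

```
// 1. Load the wanted Category values before touching the big QVD
CatFilter:
LOAD * INLINE [
Category
1
];

// 2. Optimised QVD load: keeps only rows whose Category value already exists
Cat1:
LOAD * FROM c:\MyBigFile.qvd (qvd)
WHERE Exists(Category);

STORE Cat1 INTO c:\cat1.qvd (qvd);
DROP TABLES CatFilter, Cat1;
```

Any other where-clause, or the two-argument form of Exists(), would drop this back to a standard (non-optimised) QVD load.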
Hope that helps!