Hi All,
I've got a lot of data in a SQL database, so to speed up the loading time I decided it would make sense to load the data in chunks: load some of the data and store it in a QVD file, then load more data plus the QVD file, and repeat, building up a QVD file containing all but the most recent data.
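The chunked pattern described above is essentially QlikView's standard incremental-load pattern. A minimal sketch, assuming a table with a modification timestamp (the table and field names, and the `vLastExecTime` variable, are hypothetical):

```qlik
// 1. Pull only the rows added/changed since the last run from SQL.
Transactions:
LOAD TransactionID, Amount, LastModified;
SQL SELECT TransactionID, Amount, LastModified
FROM dbo.Transactions
WHERE LastModified >= '$(vLastExecTime)';

// 2. Append the historical rows already stored in the QVD.
//    The field list here matches step 1 exactly, so the rows
//    land in the same table instead of creating a second one.
Concatenate (Transactions)
LOAD TransactionID, Amount, LastModified
FROM Transactions.qvd (qvd);

// 3. Overwrite the QVD with the combined result for the next run.
STORE Transactions INTO Transactions.qvd (qvd);
```

The key point is that steps 1 and 2 load identical field lists, so the QVD rows concatenate onto the fresh SQL rows rather than forming a separate table.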
When I tried this, the initial load takes 10-15 minutes and is saved to a QVD file fine. I then load the new data (which takes about the same time) and load the QVD file (which loads much quicker), but then the CPU maxes out for over an hour without the Close button on the script execution dialog becoming available.
Any ideas as to why the process is not completing as normal when the data has finished loading?
Thanks,
Richard
My guess is that the new data load and the QVD have different field lists, causing the creation of two separate tables and then massive synthetic-key generation at the end of the script. Is that possible? If so, make the field lists match or use the CONCATENATE keyword when loading the QVD.
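To illustrate the suggestion above: if the two loads share some but not all fields, QlikView keeps them as two tables and builds a synthetic key over every shared field, which can burn CPU for a long time on large data. An explicit `Concatenate` forces the rows into one table even when the field lists differ (field names here are illustrative, not from the original script):

```qlik
Data:
LOAD OrderID, Customer, Amount;
SQL SELECT OrderID, Customer, Amount FROM dbo.Orders
WHERE OrderDate >= '$(vLastLoad)';

// Without this keyword, a QVD whose field list differs from the
// SQL load above would become a second table linked by a synthetic key.
Concatenate (Data)
LOAD OrderID, Customer, Amount
FROM Orders.qvd (qvd);
```

Making the two field lists identical is still the cleaner fix, since mismatched fields will be null for rows that lack them.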
-Rob
The incremental load logic may not be right. For me, against a 25 million record full load, the incremental load takes only 10 minutes at most. So I would suggest reviewing your load logic.
- Arun