That sounds quite strange. Are you sure you are handling the right QVDs - not ones from a different folder, or overwritten by some other routine? Are there no WHERE clauses that might affect this? The file size itself is not a problem at only 655 MB.
I would check the file times and record counts - QvdCreateTime + QvdNoOfRecords - store them right after writing the QVDs, and compare them before and after loading them again.
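A minimal sketch of that check, using the built-in QVD file functions (the path is a hypothetical example - adjust it to your environment):

```qlik
// Hypothetical QVD path - replace with your own
LET vQvd     = 'D:\Data\Archive.qvd';

// Read creation time and record count straight from the QVD header
LET vCreated = QvdCreateTime('$(vQvd)');
LET vRecords = QvdNoOfRecords('$(vQvd)');

// Log both values so runs can be compared in the script log
TRACE QVD created: $(vCreated) | records: $(vRecords);
```

If the record count or creation time changes between the store and the later load, something else is touching the file in between.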
It seems there is some confusion in the archiving logic - I took it out of the critical data-loading script yesterday, so it is now in a separate app - and there seem to be some discrepancies between the tables being appended - maybe one is qualified and the other is not, or something like that - there are redundant fields ... Let's see.
Ah - now I know what the problem is, though I don't understand it:
- I now have a separate app containing only this archiving logic, OK?
- In this app, I have an >> UNQUALIFY *; << command at the beginning;
- The archive table is loaded straight from the database, so the field names should not be qualified;
- After that part of the archiving script, the archive table is stored away, but it is not dropped; the current data is appended right away;
<=> Still, when I load the archive table in a test app I have, the fields are all qualified with the table name;
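One mechanism that would explain the symptom, sketched under the assumption that a QUALIFY * was active at some point before the table was built: STORE writes the field names exactly as they exist in memory, so an UNQUALIFY * that comes *after* the table is loaded is too late - the qualified names are already baked into the QVD. A minimal repro (inline data and file name are hypothetical):

```qlik
QUALIFY *;                          // qualification active while the table is built

Archive:
LOAD * INLINE [
ID, Value
1, A
];

UNQUALIFY *;                        // too late: fields are already Archive.ID, Archive.Value

STORE Archive INTO [Archive.qvd] (qvd);
// Any app loading Archive.qvd now sees the qualified field names.
```

So it may be worth checking whether the QVD itself already contains qualified names (e.g. with QvdFieldName) rather than assuming the qualification happens in the test app.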
Now that I know what is causing my troubles, it should be relatively easy to fix - but why is it happening? I don't know whether this behaviour is stable, because I don't know what is causing it, so fixing it now might be a very short-sighted thing to do ...
Can anyone help me out here and tell me what could be causing this?
Thanks a lot!