Here is how I do my incremental load, since the full table is about 14 million rows and a full reload takes forever. The one drawback is that I do NOT bring in updated records. The script could also be generalized further to make it more robust.
LET vQVDexists = NOT ISNULL(QVDCreateTime('lib://blahblah/v3\ExpediteAuditExtract.qvd'));
LET vNow = Now();

IF $(vQVDexists) THEN

    TRACE('QVD Exists, so we are going to append');

    LatestUpdate:
    LOAD Date(Max(updated_dt)) as MaxDate
    FROM [lib://blahblah/v3\ExpediteAuditExtract.qvd] (qvd);

    TRACE('Got to the LatestUpdate');

    LET vMaxDate = Peek('MaxDate', 0, 'LatestUpdate');
    TRACE('Max update' & '$(vMaxDate)');

    NewData:
    LOAD *;
    SQL SELECT stuff
    FROM "TableOfInterest"
    WHERE updated_dt > '$(vMaxDate)';

    TRACE('Concatenating the new data with the old data');

    Concatenate(NewData)
    LOAD * FROM [lib://blahblah/v3\ExpediteAuditExtract.qvd] (qvd);

    STORE NewData INTO [lib://blahblah/v3\ExpediteAuditExtract.qvd] (qvd);
    DROP TABLES NewData, LatestUpdate;

ELSE

    TRACE('QVD does not exist, so we are doing a full load');

    DATA:
    LOAD everything... ;
    SQL SELECT everything FROM... ;

    STORE DATA INTO [lib://blahblah/v3\ExpediteAuditExtract.qvd] (qvd);
    DROP TABLE DATA;

END IF;
As written, this doesn't pick up updated records or drop stale ones. But I believe updates could be handled with the WHERE NOT Exists(SomeKey) construct, so that an old record is skipped during the concatenate when its primary key already exists in the freshly pulled data.
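A rough sketch of what that append step might look like with updates handled. This is an assumption on my part, not tested against this table: it assumes a primary key field (here a hypothetical `audit_id`) and relies on the new rows being loaded first, so `Not Exists(audit_id)` filters out the old copies when the stored QVD is concatenated on:

```
// Pull only rows changed since the last load (new rows AND updated rows)
NewData:
LOAD *;
SQL SELECT stuff
FROM "TableOfInterest"
WHERE updated_dt > '$(vMaxDate)';

// Append the historical rows from the QVD, but skip any record whose
// primary key was already loaded above - the fresh copy wins
Concatenate(NewData)
LOAD * FROM [lib://blahblah/v3\ExpediteAuditExtract.qvd] (qvd)
WHERE NOT Exists(audit_id);

STORE NewData INTO [lib://blahblah/v3\ExpediteAuditExtract.qvd] (qvd);
DROP TABLE NewData;
```

Note this still wouldn't remove records deleted from the source; handling deletes usually takes an extra step, such as an inner join against the current set of source keys.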
Hope this helps.