So I'm trying to follow the instructions for Insert, Update and Delete found here:
When I load my script, only the incremental rows come through. I think the inner join is matching against only the incremental load table, limiting the result to just the new rows.
Here's Qlik's Code:
Let ThisExecTime = Now(); // timestamp for this run, referenced in the WHERE clause

QV_Table:
SQL SELECT PrimaryKey, X, Y FROM DB_TABLE
WHERE ModificationTime >= #$(LastExecTime)#
AND ModificationTime < #$(ThisExecTime)#; // I get 5,633 rows here, incremental table

Concatenate LOAD PrimaryKey, X, Y FROM [lib://DataFiles/File.QVD] (qvd)
WHERE NOT EXISTS(PrimaryKey); // I get 5,005,986 rows here, primary data table

Inner Join SQL SELECT PrimaryKey FROM DB_TABLE; // keeps only keys still present in the database
If ScriptErrorCount = 0 then
STORE QV_Table INTO [lib://DataFiles/File.QVD];
Let LastExecTime = ThisExecTime;
End If
The end result is only 5,633 rows, when I expect 5,633 + 5,005,986 = 5,011,619.
Is there something I can do to tweak this? I really need the total table, not just a piece of it.
So here's my update. The issue was the SQL command. We originally tried to create a primary key (concatenated fields) in Qlik Sense. Once I moved the Key back into our data warehouse, the code runs fine. I believe the issue was with the SQL syntax we had used to make our key. Thank you everyone!
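For anyone hitting the same symptom: this matches a key mismatch. If the composite key is built in the Qlik load (e.g. FieldA & '|' & FieldB) but the final Inner Join SQL SELECT PrimaryKey returns the database's raw key, the values never match and the join drops every historical row. A sketch of building the key on the SQL side for both statements, so both sides of the join are identical (column names and the CONCAT syntax are hypothetical and vary by database):

// Hypothetical sketch: compute the same composite key in SQL for both
// the incremental pull and the deletion check.
QV_Table:
SQL SELECT CONCAT(FieldA, '|', FieldB) AS PrimaryKey, X, Y
FROM DB_TABLE
WHERE ModificationTime >= #$(LastExecTime)#
AND ModificationTime < #$(ThisExecTime)#;

Concatenate LOAD PrimaryKey, X, Y FROM [lib://DataFiles/File.QVD] (qvd)
WHERE NOT EXISTS(PrimaryKey);

// Same key expression here -- if this returned the raw database key
// instead, the Inner Join would discard all the QVD rows.
Inner Join SQL SELECT CONCAT(FieldA, '|', FieldB) AS PrimaryKey FROM DB_TABLE;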
Yes, it is because of the inner join.
Why do you need that inner join in your script (the bold/highlighted one)?
Regards,
Prashant Sangle
Thanks for replying. I believe it's there to drop records that have been deleted from the database.
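A minimal illustration of why that step exists, using inline data (the values are made up): suppose the QVD plus the increment gives keys 1 through 5, but key 5 has since been deleted from the database. The inner join against the current database keys removes it:

QV_Table:
LOAD * INLINE [
PrimaryKey, X
1, a
2, b
3, c
4, d
5, e
];

// In the real script this would be the SQL SELECT of current keys;
// here it stands in for the database, which no longer has key 5.
Inner Join (QV_Table)
LOAD * INLINE [
PrimaryKey
1
2
3
4
];

After the join, QV_Table holds 4 rows; the record deleted at the source is gone from the QVD on the next STORE.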