If the amount of your data is rather small and/or your database is fast enough to deliver the data within your needed time-frame, you could use an exists() condition in the where clause of the preceding load, like:
LOAD
    ID_ISSUE_FIELD_STRING AS ISSUE_FIELD_STRING_ID,
    FIELD_ID AS CATEGORY_ID,
    FIELD_VALUE AS CATEGORY_VALUE,
    1 AS ID_STORAGE_TYPE
WHERE not exists(ISSUE_ID);
But this means that each record from the database is pulled and checked.
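For context, a fuller sketch of that preceding-load pattern could look like this - the connection, file, table and field names here are illustrative assumptions, not taken from the original post:

```qlik
// Load the keys already stored in the QVD so exists() has values to test against.
ExistingIssues:
LOAD DISTINCT ISSUE_ID
FROM [lib://Data/Issues.qvd] (qvd);

// Preceding load: Qlik filters the rows which the SQL statement returns.
Issues:
LOAD ID_ISSUE_FIELD_STRING AS ISSUE_FIELD_STRING_ID,
     FIELD_ID AS CATEGORY_ID,
     FIELD_VALUE AS CATEGORY_VALUE,
     1 AS ID_STORAGE_TYPE
WHERE not exists(ISSUE_ID);
SQL SELECT ID_ISSUE_FIELD_STRING, FIELD_ID, FIELD_VALUE, ISSUE_ID
FROM ISSUE_FIELDS;
```

Note that the drawback just mentioned still applies: the database sends every record, and Qlik only discards the known ones afterwards.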
If this is not feasible, I would go with your workaround of loading the data within a loop only if no other alternative is possible, because I assume this would be the slowest approach.
One alternative could be to concatenate the ISSUE_ID values into a string and use them within a not in() clause - AFAIK this is often limited to a certain number of parameters and/or characters. Also conceivable would be something like max(ISSUE_ID), if it can be ensured that each new ID is greater than the previous ones.
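A hedged sketch of the concat idea, assuming the already-known IDs sit in a QVD (all names illustrative; string IDs would additionally need quoting):

```qlik
// Build one comma-separated string of the known IDs.
TempIDs:
LOAD Concat(ISSUE_ID, ',') AS IDList
FROM [lib://Data/Issues.qvd] (qvd);

LET vIDList = Peek('IDList', 0, 'TempIDs');
DROP TABLE TempIDs;

// Pass the list into the SQL - mind the parameter/character
// limits of the database and driver mentioned above.
Issues:
SQL SELECT * FROM ISSUE_FIELDS
WHERE ISSUE_ID NOT IN ($(vIDList));
```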
Another way would be to write the IDs loaded in Qlik back into the database - maybe with something like this: Write Back to Database via ETL process (using CSV or XML) - or maybe even easier, to store these IDs within a parallel process directly in the database and use them within a join to filter the data.
It's not a problem on the Qlik side, because once you have the data there you could easily restrict it with something like where exists(). The problem is the SQL, which isn't directly touched by Qlik - it is just transferred as it is to the database, and Qlik just receives the results. This means the SQL database + driver would need to be able to handle the number of values which Qlik delivers for the where clause ...
Therefore, if there is no other field which could be used with a numeric operator like >=, then you need a join approach within the SQL, as hinted in my previous post.
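If ISSUE_ID itself is strictly increasing, the max() idea mentioned earlier could look like this sketch (file and field names are assumptions):

```qlik
// Read the highest ID already stored in the QVD.
MaxID:
LOAD Max(ISSUE_ID) AS MaxIssueID
FROM [lib://Data/Issues.qvd] (qvd);

LET vMaxID = Peek('MaxIssueID', 0, 'MaxID');
DROP TABLE MaxID;

// Only rows with a greater ID are transferred from the database.
Issues:
SQL SELECT * FROM ISSUE_FIELDS
WHERE ISSUE_ID > $(vMaxID);
```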
If this is at the moment not possible or too expensive, you could optimize your loop approach by using an in() where clause with 999 parameters instead of querying each single value - reducing the needed number of loop iterations by a factor of nearly 1000.
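That batching could be sketched in the load script roughly like this - untested, with assumed names; 999 stays under the 1000-expression limit that some databases (e.g. Oracle) impose on in() lists:

```qlik
// Assumes IDsToFetch is an already loaded table holding one ISSUE_ID per row
// for the records you would otherwise query one by one.
LET vTotal = NoOfRows('IDsToFetch');

FOR vStart = 0 TO $(vTotal) - 1 STEP 999
    // Build a comma-separated list for this batch of up to 999 IDs.
    LET vList = '';
    FOR vRow = $(vStart) TO RangeMin($(vStart) + 998, $(vTotal) - 1)
        LET vList = If(Len('$(vList)') > 0, '$(vList),', '') & Peek('ISSUE_ID', $(vRow), 'IDsToFetch');
    NEXT vRow

    // One database round-trip per batch instead of per single value.
    Issues:
    SQL SELECT * FROM ISSUE_FIELDS
    WHERE ISSUE_ID IN ($(vList));
NEXT vStart
```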
The problem is that the restriction is created from my QVD. Your idea (which I think is very good) only works if I have a table with a PK. How can I create an incremental QVD from a table without a PK?
I expected to be able to use a simple where clause with a QVD field, like:
FieldFromDatabase not in (QvdField)
but OK, I think that's not possible at the moment.
If it were a table that represented a changing list where it was important to track changes, I'd expect a date field in it.
Or it would be a short list where a complete reload would be fine.
If this is your real world issue, loading even thousands of records with 2 fields should be fast in Qlik; once resident, you could use Exists() as Marcus suggests.
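Along those lines, the usual QVD incremental pattern could look like this sketch (file, table and field names assumed):

```qlik
// Pull the current state (or a delta) from the database.
Issues:
SQL SELECT ISSUE_ID, FIELD_VALUE FROM ISSUE_FIELDS;

// Append historical rows from the QVD whose key wasn't just loaded.
Concatenate (Issues)
LOAD * FROM [lib://Data/Issues.qvd] (qvd)
WHERE not exists(ISSUE_ID);

// Persist the merged result for the next run.
STORE Issues INTO [lib://Data/Issues.qvd] (qvd);
```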
If the database is outside of your control, build a DataWarehouse to create tracking.
Qlik works best when used against normalized or DW/Star Schema designs. It can't work with fields that don't exist.