Hello All,
I have the below data set:
V_MASTER:
LOAD REPORT_DATE, // snapshot date
LOB,
ISSUENO,
ISSUELOGGER,
ISSUERECEIVED,
ISSUELOGGED,
ISSUECLOSED,
STATUS,
SAC,
SAC_NAME,
SAC_BILLING_TEAM,
CONTACTNAME,
CONTACTNO,
OWNINGID,
OWNERNAME,
ENQCOM,
NEXTKCI,
UPDATES_COUNT,
LASTTOUCHED_DATE,
DIARYDATE,
EMAIL,
SOURCESYSTEM,
PRODUCT,
QUERYTYPE,
REASON_FOR_CONTACT,
CAUSE,
ADJ,
ADJAMOUNT,
DISPUTEDVAL,
REMINDME,
SOURCEACC,
FIRSTDESTINATION,
FIRST_TEAM,
ISSUE_RECD,
ISSUE_LOGGED,
OPEN_AGE_DAYS,
OPEN_AGE_BAND,
OPEN_WKG_DAYS,
OPEN_WKG_DAYS_BAND,
LOGGING_ADVISOR,
QUEUE,
UNALLOQUEUEID,
OWNINGADVISOR,
OWNING_SITE,
OWNING_TEAM,
OWNER,
OWNING_MGR,
APPVERSION,
LASTTOUCHED_WKG_DAYS,
LASTTOUCHED_WKG_DAYS_BAND,
PROGRESS,
NEXTINVOICEDATE,
MPADATE,
OBITEAM,
MANAGER,
MANAGEDBY,
INTEXT,
LASTNOTEINPUTBY,
LASTNOTEDATE,
LASTNOTE,
OPENATCOP,
TRANSFERMTH,
REGION,
CLOSINGID,
ISSUE_CLOSED,
CLOSED_AGE_DAYS,
CLOSED_AGE_BAND,
CLOSING_ADVISOR,
CLOSING_SITE,
CLOSING_TEAM,
CLOSER,
CLOSING_MGR,
COPYBILLS,
MEDIUM,
ISSUETYPE,
RAG_STATUS,
SUMMARY,
QUEUENAME,
CUSTOMER_NAME,
ACCOUNT_NUMBER,
COMPANY_NAME,
ASSIGNEE,
REPORTER,
CATEGORY,
COMPONENT,
REQUESTED_DATE,
CREATED_DATE,
UPDATED,
RESOLVED,
CYCLE_TIME,
RESOLUTION,
DAYS_SINCE_LAST_COMMENT,
ISSUE_LINKS,
VOTES,
START_DATE,
REASON,
WORK_COMMENCED_DATE,
END_DATE,
COMPANY_NAME2,
ACCOUNT_NUMBER2,
QUERY_TYPE2,
CUSTOMER_REQUESTED_DATE2,
CONSEQUENCE,
FIRST_RESPONSE_DATE,
LAST_UPDATER,
PARTICIPANTS,
TRIGGER,
VULNERABILITY,
INVOICE_NUMBER,
INVOICE_NUMBER2,
LAST_NAME,
ORIGINAL_ESTIMATE,
ITEM_COUNT,
CHASE_DATE,
RECEIVED_FROM,
COUNTRY,
AMOUNT,
QUERY_ANALYSIS,
FILE_DATE,
AVG_LOGGED_TIME,
AVG_LOGGED_TIME_BAND,
CLOSED_WEEK_COMM,
CLOSED_WEEK_ENDING,
MTH,
COUNT,
ISSUE_CLOSER,
SEGMENTATION
From....
Now, I need to apply an incremental load on the basis of these 2 fields:
So, when the data for a new date is loaded, it should delete all the issues whose STATUS is 'Open' or 'Unallocated' and then load the new, updated records. Here the issue number (ISSUENO) is the primary key, which will make sure no duplicates are created.
How can I achieve this? Can anyone please help?
Thanks in advance
Hi,
Can anyone please help me with this?
Thanks
Hi @Aspiring_Developer. Sorry, I am not as heavily involved in Qlik anymore, so I don't access the forum as often as I used to. Without a full understanding of the requirement, my assumption is that you can load your new data file and then load the old data where the new data does not exist. Let's assume you have a file with all of your data called Old.qvd and you get a new dataset from some load statement. Then I would:
1. Load my new data using the load statement.
2. Load the data from Old.qvd where the ISSUENO does not already exist.
3. Save the new table into Old.qvd (to be able to use the newest version of Old.qvd next time).
My assumption is that you would actually want any record that already existed to be replaced by the new data whenever an update to it is returned in your new load, which is exactly what this pattern does.
I hope this makes sense.
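To make the steps concrete, here is a minimal Qlik load-script sketch of that pattern. The file names (New_Data.csv, Old.qvd) and the abbreviated field list are assumptions for illustration; you would use your full V_MASTER field list and your real source in practice:

```
// Step 1: load the latest extract (hypothetical source file).
Issues:
LOAD REPORT_DATE,
     ISSUENO,
     STATUS
     // ... remaining fields from your V_MASTER load ...
FROM [New_Data.csv]
(txt, utf8, embedded labels, delimiter is ',');

// Step 2: append historical rows whose ISSUENO was NOT reloaded.
// Exists() checks ISSUENO against the values already in memory,
// so any issue present in the new file (e.g. one that was Open or
// Unallocated and has since been updated) keeps its new version.
Concatenate (Issues)
LOAD *
FROM [Old.qvd] (qvd)
WHERE NOT Exists(ISSUENO);

// Step 3: persist the merged table for the next run.
STORE Issues INTO [Old.qvd] (qvd);
```

Because the new rows are loaded first and the old rows are filtered with NOT Exists(ISSUENO), updated issues are effectively replaced rather than duplicated, which also covers the "delete Open/Unallocated, then reload" requirement as long as those issues reappear in each new extract.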