Dear all,
I'm trying to improve the performance of incremental extracts from SAP. In this case I'm loading BKPF (document headers) and BSEG (line items).
Today we extract the BKPF table by filtering on CPUDT (entry date), but this is not sufficient, because header documents may be changed after creation (clearing, reverse clearing, etc.).
To make sure we identify such transactions, we apply the following filters:
FROM BKPF
WHERE
BLDAT >= '$(vMinDate)'    // Document Date
OR CPUDT >= '$(vMinDate)' // Entry Date
OR AEDAT >= '$(vMinDate)' // Change Date
OR BUDAT >= '$(vMinDate)' // Posting Date
OR STODT >= '$(vMinDate)' // Planned Reversal Date
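For context, the filter above is typically one half of a two-step incremental pattern: first pull the changed BKPF headers, then fetch BSEG lines only for those document keys. A minimal sketch in Qlik load-script syntax follows; the fallback window, QVD handling and the use of Exists() on a composite key are assumptions about the setup, not a description of the actual job:

```
// Assumption: vMinDate is normally derived from the last successful run;
// a fixed 7-day fallback window is used here purely for illustration.
LET vMinDate = Date(Today() - 7, 'YYYYMMDD');

ChangedHeaders:
LOAD *, BUKRS & '|' & BELNR & '|' & GJAHR AS DocKey;
SQL SELECT BUKRS, BELNR, GJAHR, BLDAT, CPUDT, AEDAT, BUDAT, STODT
FROM BKPF
WHERE BLDAT >= '$(vMinDate)'
   OR CPUDT >= '$(vMinDate)'
   OR AEDAT >= '$(vMinDate)'
   OR BUDAT >= '$(vMinDate)'
   OR STODT >= '$(vMinDate)';

// Keep only BSEG rows belonging to a changed header. Filtering with
// Exists() happens on the Qlik side, so the SQL SELECT should still be
// restricted (e.g. by GJAHR) to avoid a full BSEG scan on SAP.
ChangedLines:
LOAD *, BUKRS & '|' & BELNR & '|' & GJAHR AS DocKey
Where Exists(DocKey, BUKRS & '|' & BELNR & '|' & GJAHR);
SQL SELECT *
FROM BSEG
WHERE GJAHR >= '$(vMinYear)';  // vMinYear: assumed coarse pre-filter
```

The merged result would then be concatenated with the historical QVD, with the new rows replacing older versions of the same DocKey.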
Can you please share some ideas on how we can ensure that all relevant updates are extracted while keeping the query performant?
If you run a SQL query, the performance isn't really related to Qlik but to the database, the driver and the network, because Qlik doesn't execute the query itself; it just passes the task on.
In your case SAP seems to be the most likely bottleneck, so you may increase the resources and/or the priorities on that side. Further, researching in the SAP community how such statements are processed may give hints as to whether there is any potential for optimization. One idea might be not to compare date >= date directly, but to compute date1 - date2 and check whether the result is positive or negative. Applied directly with Qlik features, such logic could be wrapped in a range function. Purely theoretically such an approach has benefits, but depending on the actual processing the difference might not be significant.
- Marcus
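The date-difference idea above could be sketched as a Qlik-side filter like the following (purely illustrative; since Qlik stores dates as numerics, the subtraction is valid, but whether it is faster than a plain >= comparison depends entirely on the engine doing the work):

```
// Illustrative only: flag recency via a difference instead of a >= comparison.
// CPUDT and vMinDate are assumed to be proper Qlik date values here.
Flagged:
LOAD *,
     If(CPUDT - $(#vMinDate) >= 0, 1, 0) AS IsRecent
Resident ChangedHeaders;
```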