kishorj1982
Creator II

Execution is taking a long time because of a WHERE condition

Hi All,

Please find attached a screenshot of the log file.

In the script we have the following condition:

WHERE Num([Original Planned Delivery Date]) < Num(Today()) AND (IsNull([Comparison Date]) OR ([Delivered Quantity Punctuality] - [Original Confirmed Quantity]) < 0);

 

We have a huge volume of data, and it seems that the above condition makes the reload take far too long.

Could you please advise how I can reduce the calculation time for the WHERE condition?
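For illustration only, a minimal sketch of one common pattern (vToday, LateFlag and Source.qvd are made-up names, not from the post): evaluate Today() once into a variable and precompute the condition as a 0/1 flag in a preceding load, so the reload filters on a single field:

```
// Sketch only, assumed names. Evaluate Today() once instead of per row,
// and reduce the WHERE clause to a single precomputed flag.
LET vToday = Num(Today());

Orders:
LOAD *
WHERE LateFlag = 1;
LOAD *,
    If (Num([Original Planned Delivery Date]) < $(vToday)
        and (IsNull([Comparison Date])
             or [Delivered Quantity Punctuality] - [Original Confirmed Quantity] < 0),
        1, 0) as LateFlag
FROM Source.qvd (qvd);
```

Whether this helps depends on where the time is actually spent; note that if the source is a QVD, any WHERE clause other than a plain Exists() disables the optimized QVD load.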

Thanks

13 Replies
kishorj1982
Creator II
Author

Thanks, Marcus.

Could you please share a sample example for this? I can't work out how to proceed.

 

Regards,

Kishor

 

marcus_sommer

Some parts of it (creating a 0/1 workday flag based on weekdays, plus a separate holiday list applied with a mapping) are included here: Master-Calendar-with-movable-holidays. Not included is the cumulating per Peek() or Previous(), but it shouldn't be too complicated to apply. Especially if the calendars are created separately, it would be quite simple, like:

LOAD *, RangeSum(Peek('WorkdayFlagCum'), WorkdayFlag) as WorkdayFlagCum;
LOAD ... normal calendar ...;

The mentioned assigning of the Today() value might be possible with a Lookup() in another preceding load. If not, it would of course be possible to fetch it with a resident load on this calendar table, map/join it back, and apply the bucket mapping on it.
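A sketch of how the two ideas above could fit together, using made-up names and a generated one-year calendar (the real calendar, date range, and holiday mapping would come from your own model):

```
// Sketch only: minimal calendar with a cumulated workday counter,
// per the Peek()/RangeSum() hint above. Holidays would additionally
// be excluded via a mapping, omitted here for brevity.
Calendar:
LOAD *,
    RangeSum(Peek('WorkdayFlagCum'), WorkdayFlag) as WorkdayFlagCum;
LOAD
    Date(MakeDate(2024, 1, 1) + IterNo() - 1) as Date,
    // WeekDay() < 5 = Monday..Friday with default first-weekday settings
    If (WeekDay(MakeDate(2024, 1, 1) + IterNo() - 1) < 5, 1, 0) as WorkdayFlag
AutoGenerate 1
WHILE MakeDate(2024, 1, 1) + IterNo() - 1 <= MakeDate(2024, 12, 31);

// Fetch today's cumulated value, e.g. with Lookup() after the load:
LET vTodayCum = Lookup('WorkdayFlagCum', 'Date', Date(Today()), 'Calendar');
```

The workday difference between two dates is then just the difference of their WorkdayFlagCum values, which avoids any per-row iteration over the fact table.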

- Marcus

kishorj1982
Creator II
Author

Hi Marcus,

 

I am trying this but am unable to achieve the expected results; the change is affecting my results.

My aim is to reduce the overall reload time of the WHILE condition without affecting the results.

Below is the sample code:

LOAD *,
    Dual(
        If ([Delivery Difference Days] > 0,
            If ([Delivery Difference Days] <= 5, 'Week or Less',
             If ([Delivery Difference Days] <= 10, 'Two Weeks or Less',
              If ([Delivery Difference Days] <= 20, 'Month or Less',
               If ([Delivery Difference Days] <= 40, 'Two Months or Less', 'Over Two Months'))))),
        If ([Delivery Difference Days] > 0,
            If ([Delivery Difference Days] <= 5, 1,
             If ([Delivery Difference Days] <= 10, 2,
              If ([Delivery Difference Days] <= 20, 3,
               If ([Delivery Difference Days] <= 40, 4, 5)))))) as [To Be Delivered Status];
LOAD [Sales Document Item Key],
    Sum([Date Type]) as [Delivery Difference Days]
GROUP BY [Sales Document Item Key];
LOAD [Sales Document Item Key],
    ApplyMap('FactoryCalendar_map',
        [Original Factory Calendar] & '-' & Num([Original Planned Delivery Date] - IterNo() + 1),
        0) as [Date Type]
WHILE Num([Original Planned Delivery Date] - IterNo() + 1) >= Num(Today());

 

Could you please suggest what modifications we need to make here?

 

Thanks

 

marcus_sommer

My last suggestion was not to make a few modifications but to change the whole data model. If my first suggestions (splitting the preceding load chain into several parts, plus the various optimization approaches mentioned) didn't speed up the load times enough, then you need to do it in a different way.

One would be to implement an incremental logic for it; another is my last suggestion to transfer the date calculations and their bucket matching to the calendar level and then just pick the values from there. I don't think this approach is very difficult to develop; I'd assume it's easier than your existing one. But you shouldn't do it in your existing application; instead, create a new one, build one or two of these calendars and a reduced dataset of your fact table, and develop it step by step.

- Marcus
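Marcus's calendar-level idea could be sketched roughly as follows. All names here are hypothetical, and it assumes the Calendar table already carries a DiffWorkdays field (workdays between each date and today, e.g. derived from the cumulated workday counter): compute the bucket once per calendar date, then map it onto the facts instead of evaluating the nested If() chain per fact row.

```
// Sketch only, assumed names: one Dual() bucket per calendar date.
// A mapping table has exactly two columns: the key and the value.
BucketPerDate_map:
MAPPING LOAD
    Num(Date) as DateNum,
    If (DiffWorkdays <= 5, Dual('Week or Less', 1),
     If (DiffWorkdays <= 10, Dual('Two Weeks or Less', 2),
      If (DiffWorkdays <= 20, Dual('Month or Less', 3),
       If (DiffWorkdays <= 40, Dual('Two Months or Less', 4),
        Dual('Over Two Months', 5))))) as Bucket
RESIDENT Calendar
WHERE DiffWorkdays > 0;

Facts:
LOAD *,
    ApplyMap('BucketPerDate_map',
        Num([Original Planned Delivery Date]), Null()) as [To Be Delivered Status]
FROM Facts.qvd (qvd);
```

The nested If() chain then runs once per calendar date (a few thousand rows) rather than once per fact row, and the fact load itself stays a simple ApplyMap().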