I'm not sure that I really understand what you want to do. But it must be slow - at least with a data set of around 1 M records.
This is mainly caused by the outer loop against the table. For each variable assignment Qlik jumps from the loop to the table to fetch the indexed value, which could be regarded as an I/O operation that needs to be handled between Qlik and the OS and introduces some delay.
Further, you create at least a million loads, each of which needs to be initialized by checking the existence of the field names and field values. In fact it's more than a million, because the nested loop also creates a cartesian product.
Some years ago we had a somewhat similar task: creating kml files with various local/regional areas just from the lat/long information of our POS. For this we needed a logic to find the outermost POS, and as a possible solution we identified the Graham scan (see Wikipedia). I didn't try to implement it as program logic in QlikView myself but gave it to a colleague, who developed various solutions in PowerShell, Java and Python. They all worked, but were no faster than creating appropriate cartesian tables in QlikView, performing all the calculations there and then ordering/filtering/aggregating the final results.
Therefore the use of external programming had no benefit for us, and it also showed that it's possible to transfer such logic into native load statements. Qlik is really optimized for this, even though it's not a programming tool like Java or Python.
In regard to your task I can imagine two ways of improving the performance. One would be a bit similar to your current approach, but looping not through the data table but through the system tables. Each field is a system table containing only the distinct field values and a bit-stuffed pointer to the data table. If there is a significant degree of redundancy in the data, the needed number of iterations would be reduced accordingly. Further, I think the calculation of d might be separated from the various if-branches - meaning: create it separately, map it back, and then do all the other stuff.
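A minimal sketch of such a loop over the distinct values, using the built-in FieldValue()/FieldValueCount() functions (the field name MyField is just a placeholder):

```qlik
// Iterate the distinct values of MyField (the field's symbol table)
// instead of every row of the data table.
for i = 1 to FieldValueCount('MyField')
    let vValue = FieldValue('MyField', $(i));
    // ... perform the calculation once per distinct value ...
next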
Another way would be to create the mentioned cartesian product with an inner loop within the load; here is an example of the general method:
load *, recno() as RecNo, rowno() as RowNo, iterno() as Iterno,
StartValue + iterno() - 1 as ContinuousValue
from Source while StartValue + iterno() - 1 <= EndValue;
In the end you may not strictly need the various record IDs, but they are quite useful to validate the data and/or order/filter/match them in any way. Also, with this kind of logic you might be able to skip duplicates by loading distinct, by loading from the field values, or by implementing where-clauses. Here, too, it might be possible to separate the calculations and match the results afterwards.
Nevertheless it will remain a quite heavy task. Lastly, I suggest considering an incremental logic, storing historical calculations in sliced QVDs (maybe by YYYYMM or similar categories).
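Such an incremental slicing could be sketched like this (Source, MonthKey and the file name are placeholder assumptions):

```qlik
// Recalculate only the current month and store it as its own QVD slice;
// older slices stay untouched and can simply be loaded back when needed.
let vMonth = date(monthstart(today()), 'YYYYMM');

Result:
load *              // the heavy calculation, restricted to the slice
resident Source
where MonthKey = '$(vMonth)';

store Result into [Result_$(vMonth).qvd] (qvd);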
Yes, when doing it over the entire dataset it's a really tall task (quadratic time complexity over an n > 10^6 dataset), but even when I downsized the problem to i = 1000, j = 3 it took a really long time.
But if I understand you correctly, it's the looping-over-tables logic that takes such a long time, so creating a cartesian table and doing the calculations on it will be much faster?
Yes, in my experience creating cartesian tables with several million records is quite fast, especially if the tables are rather small with just a few fields. This is valid in general for join as well as loop approaches, meaning for example:
t: load Product from Products; join(t) load Date, 0 as Values from Calendar;
which might then be concatenated to a fact table to create a fact record for each Product & Date, or to create a basis table for any mapping targets, as well as:
load *, rowno() as RowNo;
load *, ContinuousValue1 * ContinuousValue2 as X;
load *, StartValue2 + iterno() - 1 as ContinuousValue2, iterno() as Iterno2
    while StartValue2 + iterno() - 1 <= EndValue2;
load *, recno() as RecNo, iterno() as Iterno1,
    StartValue1 + iterno() - 1 as ContinuousValue1
from Source while StartValue1 + iterno() - 1 <= EndValue1;
in which multiple inner loops are used within a preceding load.
Both methods are just show-cases to demonstrate possibilities, and there are quite probably ways to reduce the effort. The important point is to check whether everything really needs to be created first and then filtered, or whether the creation might already include some conditions. For example, I would rather not apply a join like the one above, but instead map the created and release dates from the products and then apply a while-loop like in the second show-case. That could reduce the number of records significantly, as well as improve the data quality by not providing old dates for new products.
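As a sketch of that idea (the table and field names Products, CreatedDate and ReleaseDate are assumptions):

```qlik
// Generate one record per product and valid date, restricting the
// cartesian product to each product's own lifetime instead of the
// full calendar.
ProductDates:
load Product,
     CreatedDate + iterno() - 1 as Date
resident Products
while CreatedDate + iterno() - 1 <= ReleaseDate;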
For your case I can imagine similar possibilities, for example creating the cartesian product and the on-top calculations only once, not for each record, and then mapping the results back to the records.
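Sketched with hypothetical fields a and b and a placeholder for the calculation of d, that could look like:

```qlik
// Calculate d once per distinct (a, b) combination, then map the
// result back to the big table instead of recomputing it per row.
DistinctPairs:
load distinct a, b resident Source;

ResultMap:
mapping load a & '|' & b as Key,
             sqrt(a*a + b*b) as d   // placeholder for the real calculation
resident DistinctPairs;

Facts:
load *, applymap('ResultMap', a & '|' & b) as d
resident Source;   // Source could be dropped afterwards

drop table DistinctPairs;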
I've dug deep into your code, but could you do your cartesian join without the loop and then put the computation in a load?