where date#(BKDate,'YYYY-MM-DD') >= date#('$(sFromDate)','YYYY-MM-DD') and date#(BKDate,'YYYY-MM-DD') < date#('$(sToDate)','YYYY-MM-DD')
and TOTAL_SECONDS > 0;
applymap('MapParameterToGroup', Activity.ParameterName & '-'&TemplateID,'') as Activity.ParameterGroupName
drop table Activity_temp;
The crosstable performs almost equally to the for loop. If you need to split the crosstable result into two tables, the for loop is quicker than the crosstable load. In my experiment with 1,000,000 rows and 13 columns, the for loop finished in 60 seconds and the crosstable in 72 seconds. For me, both are too slow as data size increases.
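For reference, here is a minimal sketch of the two alternatives I am comparing. The table name Data and the value columns Col1..Col13 are placeholders for my real source table:

```qlik
// Alternative 1: the built-in crosstable load.
// CrossTable(attribute field, data field, number of qualifier columns)
Transposed:
CrossTable(ColumnName, Value, 1)
LOAD BKDate, Col1, Col2, Col3 // ... up to Col13
RESIDENT Data;

// Alternative 2: a for loop that unpivots one value column per pass.
// Each LOAD auto-concatenates into the same output table.
FOR i = 1 TO 13
  Transposed2:
  LOAD BKDate,
       'Col$(i)' as ColumnName,
       Col$(i)   as Value
  RESIDENT Data;
NEXT i
```

Both produce the same three-column result (key, attribute name, value); only the timings differ as described above.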
Unfortunately I need to transpose the table in order to display the data nicely in a straight table with a horizontal linear gauge.
Hopefully I am misusing the crosstable load somehow, in a way that it can be sped up!
My other option is to create an extension object, but then I have the problem of not being able to print it in a PDF report, which is required.
If an incremental load is possible for the crosstable, then I should also be able to do the same for the for loop alternative. The performance gain would probably be similar for both, so the crosstable load would still be slower. I have tested the two alternatives with different sizes of input data to simulate the smaller amount of source data that an incremental load would give me.
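For completeness, the incremental pattern I have in mind is the standard QVD-based one; the field names ModifiedDate, ActivityID, the variable vLastReload and the file Activity_history.qvd are all placeholders here, not from my actual script:

```qlik
// Load only rows added or changed since the last reload
Activity:
LOAD * FROM [lib://Source/Activity.qvd] (qvd)
WHERE ModifiedDate >= '$(vLastReload)';

// Append the unchanged history from the previously stored QVD
CONCATENATE (Activity)
LOAD * FROM [lib://Source/Activity_history.qvd] (qvd)
WHERE NOT Exists(ActivityID);

// Persist the combined table for the next incremental run
STORE Activity INTO [lib://Source/Activity_history.qvd] (qvd);
```

Either transpose alternative would then run only on the reduced Activity table, which is why I expect the relative ranking of the two to stay the same.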
My issue is with the performance of the crosstable load; I just cannot understand why it is so slow. The for/where solution seems very costly to me, and I did not really expect it to be quicker than a built-in algorithm.