Dear all,
I have the code below running in Qlik Sense to get the profit and margin for each ID from the fact table.
I have 300,000 unique IDs in total, and for each of them I am getting the profit from a database package.
The problem is that it takes far too much time, which is of course because for every single ID the script connects to the database package, which runs its own logic and then returns the profit for that ID to Qlik Sense as a function result.
I need to understand if there is any possible way to fetch the records faster.
Fact:
LOAD ID
FROM TEST.QVD (qvd);

// loop over every distinct ID and call the package function once per ID
// (FieldValue() is 1-based, so the counter starts at 1)
For i = 1 to FieldValueCount('ID')
    Let vFieldIDValue = FieldValue('ID', $(i));

    Profit:
    LOAD *;
    SELECT ID, Profit FROM TABLE(pkg.Profit('$(vFieldIDValue)'));
Next

DROP TABLE Fact;
So, in the above code pkg is the database package and Profit is the function which takes a single ID from the fact table as a parameter, does its calculation and returns the result as the profit.
NOTE: when we execute "SELECT ID, Profit FROM TABLE(pkg.Profit('$(vFieldIDValue)'))" directly in the database it takes much less time than in Qlik Sense.
Hi @Tool_Tip,
Don't you have an already calculated profit table in your database?
Regards,
Mark Costa
Hi Mark,
If such a table existed, it would not be an issue for us. But unfortunately, the profit is calculated inside the package by a function that takes a single ID as its parameter.
With the mix of processing between Qlik and the database shown above you won't be able to speed up the performance. Therefore you need to transfer more of the essential logic into the database or, if that's not wanted, into Qlik.
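For example, if the database is Oracle, pkg.Profit could be called as a table function within a single statement, and the IDs are also available in a table on the database side (these are all assumptions from my side - the table name below is just a placeholder), the whole loop could be pushed into the database instead of making 300,000 separate calls from Qlik:

// minimal sketch - id_table is a placeholder for a database table holding the IDs
Profit:
LOAD *;
SELECT p.ID, p.Profit
FROM id_table f, TABLE(pkg.Profit(f.ID)) p;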
Thank you, Marcus, for your valuable response, but I don't think changing the database because of Qlik performance is a solution.
I just need a solution on the Qlik side, if that is possible, because the database is working fine and returns the records within seconds...
The long execution time isn't a special matter of Qlik; it's caused by the inefficient way the task is performed. Regardless of the calling tool or API it cannot be fast, and I'm quite sure it would also be slow within the source database itself with this method of looping through a long list/array/table and initialising variables and a procedure in each iteration and storing the results somewhere.
Each call creates multiple threads and handles which need to be coordinated between the caller, the database and the underlying OS + storage/network, which always adds a latency of some milli/nano-seconds. In the end the waiting/idle times may be (much) bigger than the real processing times, beside the fact that there is a lot of redundancy in initialising/checking all the variables and table calls again and again.
The mentioned routine may be sensible when called from a front-end form for a single customer/product, but it is IMO not suitable for creating massive data sets. Therefore my suggestion is to do the essential processing in one place and all at once - for example by filtering the relevant tables against the list table with an inner join and then applying the needed aggregations/transformations on top, as in the sketch below. The existing procedure should be a valuable template for developing such a new routine.
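A rough sketch of what I mean (all table and column names are only placeholders, since the real profit logic has to be taken from the existing package):

// minimal sketch - transactions and id_list are placeholders for the real source tables
ProfitPerID:
LOAD *;
SELECT t.ID,
       SUM(t.revenue) - SUM(t.cost) AS Profit
FROM transactions t
INNER JOIN id_list f ON f.ID = t.ID
GROUP BY t.ID;

This returns all profits in one single statement, and the result could then be joined to the Fact table within Qlik.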