QFanatic
Creator

A better way to code this scenario

hi guys,

I need some assistance here. I am reading from 2 HUGE tables and I'm running into performance issues.

So in my model I do the following:

1. I loop through a list of Products that we sell in a For...Next loop.

2. Then, for one specific day, I go look in Table1 for the Sessions for that specific product (you will see this in an inner select).

3. Then I take the Sessions that I got in (2) and read ALL the info for them in Table2, for the WHOLE month - not just one day.

So in essence, I'm finding where customers bought ONE specific product on ONE day of the month, and then pulling all the other products, and the detail, for the rest of the month as well (sketched below).
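
For illustration, a minimal Qlik script sketch of this per-product loop. The table, field, and variable names (Table1, Table2, ProductID, SessionID, vDay, vMonth) are hypothetical stand-ins, and vDay/vMonth are assumed to be set earlier in the script:

ProductList:
LOAD DISTINCT ProductID FROM [lib://Data/Products.qvd] (qvd);

FOR Each vProduct in FieldValueList('ProductID')

    // One database round-trip per product: the inner select fetches
    // that day's sessions, the outer select then pulls the whole month
    // of detail for them. Each iteration auto-concatenates into Detail.
    // This repeated scan of Table2 is the expensive part.
    Detail:
    SQL SELECT SessionID, ProductID, SaleDate, Amount
    FROM Table2
    WHERE SaleMonth = '$(vMonth)'
      AND SessionID IN (
        SELECT SessionID FROM Table1
        WHERE ProductID = '$(vProduct)'
          AND SaleDate = '$(vDay)');

NEXT vProduct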

 

There has to be a better way to do this.

Thank you

 


5 Replies
marcus_sommer

I think pulling both tables into Qlik and doing the matching there could be much faster than your approach. Even if the number of records is large, with just those few fields the query execution and the data transfer shouldn't take too much time.

- Marcus
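
To make that suggestion concrete, here is a minimal sketch of doing the match in Qlik with a preceding LOAD and Exists() instead of a SQL inner select; all table, field, and variable names are hypothetical:

// Pull the one day of sessions for ALL products in a single query.
Sessions:
SQL SELECT SessionID, ProductID
FROM Table1
WHERE SaleDate = '$(vDay)';

// Pull the month of detail once, keeping only rows whose SessionID
// was already loaded above - the match happens in Qlik, not the DB.
Detail:
LOAD * Where Exists(SessionID);
SQL SELECT SessionID, ProductID, SaleDate, Amount
FROM Table2
WHERE SaleMonth = '$(vMonth)';

// Sessions has served its purpose as a filter; dropping it avoids a
// synthetic key between the two tables.
DROP Table Sessions;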

QFanatic
Author

Thank you Marcus, much appreciated.
QFanatic
Author

Marcus,

The requirement is also that, when all the processing is done, I write out a file per Product, and I'm not too sure how to do this.

So I would have multiple rows of data containing Session, Product and other fields. How do I take this from a table, in Product order, and write it out?

 

thank you

 

 

marcus_sommer (Accepted Solution)

If you really want to store a file for each product, you will of course need an outer loop which iterates through each product. But this loop can run on a final table which already contains the appropriate matchings/transformations for all the data. That means the aim should be to do all the matching - all products and all sessions and so on - together (it may of course take several processing steps), with the final storing step coming afterwards.

I think in many scenarios this will be much faster than your approach of running the matching/transformation steps n times against the whole dataset.

Besides this, I suggest re-thinking the whole approach. Is it really a necessity and/or a benefit to slice the data on a daily product level...

- Marcus
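
A minimal sketch of that final storing step, assuming the matched data already sits in a resident table called FinalTable (the table, field, and path names are hypothetical):

// All matching/transformations are already done in FinalTable.
// Iterate over the distinct products and store one file per product.
FOR Each vProduct in FieldValueList('ProductID')

    ProductSlice:
    NoConcatenate
    LOAD * Resident FinalTable
    WHERE ProductID = '$(vProduct)';

    STORE ProductSlice INTO [lib://Exports/Product_$(vProduct).qvd] (qvd);
    DROP Table ProductSlice;

NEXT vProduct

If the ProductID values can contain characters that are not valid in file names, they would need to be sanitized before being used in the STORE path.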

QFanatic
Author

hi Marcus,

thanks for your response.

I agree with you that the process needs to be re-visited, and maybe this is not really something for Qlik to do, but more for the data engineers' side... The tables where I source the data from literally have billions of rows...

 

Thanks again