warfollowmy_ver
Creator III

How would you advise implementing an ABC analysis on huge data volumes?

Topic


Recipe for a Pareto Analysis


My comment:


Thank you for this information! I did it, and it works. But when a single day of sales contains 30,000 items and there are statistics for the 5 years prior to that date... even for just one week the server simply stops, because the calculation is carried out row by row in a loop. If instead I use a simple straight table with the accumulation option, everything is calculated instantly, but that approach has its own limits. How would you advise implementing an ABC analysis on huge data volumes?
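
For reference, the accumulation I mean is the usual rangesum/above construction from the recipe, roughly like this (Sales and SKU are only placeholder field names here, and the 80 % / 95 % breakpoints are just an example):

    // Straight table, dimension SKU, sorted descending by Sum(Sales).
    // Cumulative share of total sales, recalculated for every row:
    RangeSum(Above(Sum(Sales), 0, RowNo())) / Sum(TOTAL Sales)

    // ABC class derived from that cumulative share:
    If(RangeSum(Above(Sum(Sales), 0, RowNo())) / Sum(TOTAL Sales) <= 0.80, 'A',
       If(RangeSum(Above(Sum(Sales), 0, RowNo())) / Sum(TOTAL Sales) <= 0.95, 'B', 'C'))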


What can you say, guys? I do have one idea: calculate the ABC classes beforehand in MS SQL 2012 and load the ready-made data. That is much faster.

The challenge is not just to display the ABC classes, but to have them as an object that can be selected on, and, importantly, to do that with a large array of data.

4 Replies
marcus_sommer

Judging from my own very small and quite old environment, I wouldn't expect serious performance issues with this kind of analysis and your amount of data (about 55 M records, right?). Could you therefore give more details about your environment, what the data model looks like, and how the data should be displayed?

- Marcus

warfollowmy_ver
Creator III
Author

There is actually even more data - about 1.2 billion records in total for the 5 years - and everything else runs very quickly on the server, just not in this case. An example is attached: the simplest possible data model, 90,000 rows, built exactly as in the recipe. On my local machine, when I select all items, the table simply never finishes updating.

When I put the document on the server, it hung and the data never updated; the CPU is fully loaded and nothing more happens. This is because the accumulation is calculated with a cyclic formula. If I instead use the accumulation option of a simple straight table, everything is calculated instantly, but that solution has its limits.

What do you think? Am I doing something wrong? =)))

marcus_sommer

Technically you didn't do anything wrong, but conceptually it doesn't make much sense to create an ABC analysis on such a granular level. I think you need to categorize your SKU field in some way to get a real benefit from an ABC or Pareto analysis.

Besides that, it's not recommended to create such large table objects (in the number of columns and/or rows), because they need large amounts of resources in RAM and time - especially if the data comes from different tables. See also: Logical Inference and Aggregations.

In your case it's particularly dramatic because of the accumulation expressions built on rangesum(above(...)). That means each cell has to recalculate the whole expression anew for its specific row level, and I assume this kind of calculation isn't multi-threaded. I haven't checked that, nor the RAM consumption, but I wouldn't be surprised if it hits some technical limitation such as the maximum object RAM (there are settings for it in the QMC) and that there was unfortunately no error message to inform you.
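
As a rough, back-of-the-envelope estimate (taking the 90,000 SKU rows of the attached example): row n of the chart has to re-sum roughly n values, so one full update performs on the order of 90,000² / 2, i.e. about 4 billion additions in a single thread, whereas the built-in accumulation of a straight table needs only one pass of about 90,000 additions.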

In the attachment I have set a calculation condition on the table - count(distinct SKU) <= 1000 - to prevent overly large calculations (it's generally recommended to restrict large objects in this way) and created a simple cluster for the SKU in a listbox - class(keepchar(SKU, '0123456789'), 250). If I now select various clusters, the table calculates very quickly.
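
In concrete terms, the two settings in the attachment are:

    // Calculation condition of the table object - it only calculates
    // when at most 1000 SKUs are possible:
    Count(DISTINCT SKU) <= 1000

    // Expression behind the cluster listbox - buckets of 250 SKUs,
    // built from the numeric part of the SKU code:
    =Class(KeepChar(SKU, '0123456789'), 250)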

- Marcus

warfollowmy_ver
Creator III
Author

This solution doesn't accomplish the task: we need an ABC analysis of all products, and we need to work with the groups as selectable objects. I solved the problem in two ways. For a live view, I built the accumulation into a simple straight table, so we can instantly see where a given product falls. To work with the groups as objects, the groups are recalculated weekly in MS SQL 2012 and loaded ready-made into QlikView.
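
For anyone who hits the same wall: the weekly pre-classification doesn't have to live in the database - the same grouping can also be built once per reload in the QlikView load script. A rough sketch, under the assumption of a fact table Facts with fields SKU and Sales (the 80 % / 95 % breakpoints are again only an example):

    // 1. Total sales per SKU
    SkuTotals:
    LOAD SKU,
         Sum(Sales)                            as SKU_Sales
    RESIDENT Facts
    GROUP BY SKU;

    // 2. One linear pass in descending order of sales builds the running total
    Cum:
    LOAD SKU,
         SKU_Sales,
         RangeSum(Peek('CumSales'), SKU_Sales) as CumSales
    RESIDENT SkuTotals
    ORDER BY SKU_Sales DESC;
    DROP TABLE SkuTotals;

    // 3. Classify: the last row of Cum holds the grand total, so
    //    CumSales / Peek('CumSales', -1, 'Cum') is the cumulative share.
    //    ABC_Class links back to the fact table by SKU and can be
    //    selected like any ordinary field.
    ABC:
    LOAD SKU,
         If(CumSales / Peek('CumSales', -1, 'Cum') <= 0.80, 'A',
            If(CumSales / Peek('CumSales', -1, 'Cum') <= 0.95, 'B', 'C')) as ABC_Class
    RESIDENT Cum;
    DROP TABLE Cum;

This keeps the heavy cumulative work out of the chart objects and leaves only a plain field to select on.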