Eshwar1
Contributor III

Calculated fields and Percentage value buckets

Hi All,

I need to create 4 different line charts using 4 different calculated fields.

Dim: Fully_Engaged%

Measure: Count(Branch_Code)

I am able to achieve one of them using the script below:

//To create percentage value buckets

Temp_Min_Max:
Load * Inline
[
Min,   Max,    Code
0.0%,  10.0%,  10.0%
10.1%, 15.0%,  15.0%
15.1%, 20.0%,  20.0%
20.1%, 25.0%,  25.0%
25.1%, 30.0%,  30.0%
30.1%, 35.0%,  35.0%
35.1%, 40.0%,  40.0%
40.1%, 45.0%,  45.0%
45.1%, 50.0%,  50.0%
50.1%, 55.0%,  55.0%
55.1%, 60.0%,  60.0%
60.1%, 65.0%,  65.0%
65.1%, 70.0%,  70.0%
70.1%, 75.0%,  75.0%
75.1%, 80.0%,  80.0%
80.1%, 85.0%,  85.0%
85.1%, 90.0%,  90.0%
90.1%, 95.0%,  95.0%
95.1%, 100.0%, 100.0%
];

 

Data:
Load Branch_code,
     Num(Sum(if(Segment = 'Fully Engaged', Cust_Cnt, 0)) / Sum(Cust_Cnt), '#,##0.0%') as Fully_Engaged%
Resident Temp    // Temp is loaded earlier and holds Branch_code, Segment and Cust_Cnt
Group by Branch_code;

 

// Attach the matching bucket (Code) to each Fully_Engaged% value
Inner Join (Temp_Min_Max)
IntervalMatch (Fully_Engaged%)
Load Min,
     Max
Resident Temp_Min_Max;

The above script is working fine, as expected.

But I need to create 3 more calculated fields (Wealth_Engaged%, Trans_Engaged% and Low_Engaged%) and associate percentage value buckets with each of them, which I'm not able to achieve.

Can you suggest how to write the script to achieve this?

Thanks in advance

-Eshwar

 

5 Replies
marcus_sommer

You don't need an intervalmatch + join approach for such a simple cluster; instead you could derive it with a rounding, like:

num(
rangemax(0.1,
   ceil(
     sum({< Segment = {'Fully Engaged'}>} Cust_Cnt) / sum(Cust_Cnt),
   0.05)
),
'0.0%')

The above is an example of how it might be calculated as an expression within the UI. If it really needs to be a dimension, it might be wrapped with an aggr(Expression, Branch_code), as sketched below.
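A rough sketch of that wrapping, reusing the expression above as a calculated dimension (field names taken from the script above; not tested):

// one bucket value per Branch_code, usable as a calculated dimension
aggr(
  num(
    rangemax(0.1,
      ceil(
        sum({< Segment = {'Fully Engaged'}>} Cust_Cnt) / sum(Cust_Cnt),
        0.05)
    ),
    '0.0%'),
  Branch_code)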

If they should really be native fields and the cluster sizes are more irregular, I would use a mapping with something like:

m: mapping load Min + (iterno() / 100), Code
     from X while Min + (iterno() / 100) < Max;

and then wrap the percent calculation with an applymap(), whereby the lookup value should be rounded to the pattern of the mapping values.
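As a rough sketch of how that applymap() wrapping could look in the load script - the table name, the bucket field name and the 0.01 rounding step are only assumptions here, and the round() step must mirror the key pattern the mapping load actually generates:

Buckets:
Load Branch_code,
     // look up the bucket Code via the mapping m defined above;
     // the rounding step is illustrative and must match the mapping keys
     applymap('m',
              round(Sum(if(Segment = 'Fully Engaged', Cust_Cnt, 0)) / Sum(Cust_Cnt), 0.01),
              null()) as Fully_Engaged_Bucket
Resident Temp
Group by Branch_code;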

Eshwar1
Contributor III
Author

Thank you very much @marcus_sommer 

Your expression below is working as I expected:

num(
rangemax(0.1,
   ceil(
     sum({< Segment = {'Fully Engaged'}>} Cust_Cnt) / sum(Cust_Cnt),
   0.05)
),
'0.0%')

 

I didn't try the mapping load approach yet; I will try that as well.

Eshwar1
Contributor III
Author

Hi @marcus_sommer ,

In your expression above, how do the rangemax and ceil functions work? Could you kindly elaborate?

Thanks

Eshwar

 

marcus_sommer

The idea behind the above approach is to limit the possible results of the division from an unlimited number of values to a fixed set by applying a rounding function with a certain width - here 5% clusters. For a common percent calculation this restricts it to at most 21 possible returns.
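To make that concrete with a couple of illustrative values (not taken from the thread):

// ceil(x, 0.05) rounds x up to the next multiple of 0.05, e.g.:
//   ceil(0.123, 0.05) = 0.15   -> the 15% cluster
//   ceil(0.371, 0.05) = 0.40   -> the 40% cluster
// For a ratio between 0 and 1 the only possible returns are
// 0, 0.05, 0.10, ..., 1.00, i.e. 21 distinct values;
// rangemax(0.1, ...) then lifts everything below 10% into the 10% cluster.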

In the above case the wanted clusters are not all equal in size, which means further adjustments are needed - here the unwanted outliers are caught with a range function. Both techniques, rounding and catching outliers, may be applied nested n times and/or wrapped with a pick(match()) to return only certain (unequal) clusters with any kind of string or dual() returns in a certain order.
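For the pick(match()) part, a small sketch with made-up cluster boundaries and dual() labels, just to show the pattern:

// the rounded ratio is matched against a fixed list of boundaries and
// mapped to dual() labels (boundaries and labels are invented here)
pick(
  match(
    ceil(sum({< Segment = {'Fully Engaged'}>} Cust_Cnt) / sum(Cust_Cnt), 0.25),
    0.25, 0.5, 0.75, 1),
  dual('low', 1), dual('medium', 2), dual('high', 3), dual('very high', 4))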

The reason for all of this is to avoid applying n (nested) if-conditions, which aren't easy to develop or maintain, and to avoid the underlying calculation having to be repeated n times. In the above, the calculation is performed only once and the result is simply matched into the wanted cluster.

Eshwar1
Contributor III
Author

Thank you for the clarification, Marcus. 😊