I need to chart sum(cost) / count(order) by day, with the count of orders aggregated to a branch location. The trick is how to aggregate the order count properly. In the attached qvw, there are 3 locations, 5 orders, and cost amounts for a 7-day period. The order volume for locations 1 and 2 is 2, and for location 3 it is 1. I want the cost per unit chart to express the daily cost for each location divided by the total orders for that location (e.g., Location 1 and 2 costs are divided by 2 each day, and Location 3 cost is divided by 1 every day). I was able to express the units properly on day 1 only. In order for the cost per unit chart to be expressed properly, I must not have empty values for subsequent days... Thanks in advance for any advice you can offer!
The right aggregation approach is to use TOTAL rather than Aggr() when aggregating over a dimension. Here is a discussion of the difference by hic: http://community.qlik.com/blogs/qlikviewdesignblog/2013/09/09/aggregation-scope
In the discussion below the blog post he points out that a dimensional scope may be included by adding a field reference after TOTAL, so that the aggregation disregards all dimensions except that field. So, in my example, the expression should be Count(DISTINCT TOTAL <Location> Order) (note that DISTINCT precedes TOTAL in the function syntax) to obtain a constant order count per location over time. Dividing cost by this provides the desired cost per unit result and chart.
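For reference, a sketch of the full chart expression this implies, assuming Day and Location are the chart dimensions and the field names from the example data:

```
// Daily cost per unit: TOTAL <Location> tells the Count to ignore the Day
// dimension, so the denominator is each location's total distinct orders
// (2 for Locations 1 and 2, 1 for Location 3) on every row of the chart.
Sum(Cost) / Count(DISTINCT TOTAL <Location> Order)
```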
The solution to this is attached.
Michael's script suggestion below is fine for the example, but the drawback I found with aggregating in script is that as more dimensions are added, the resulting tables become large, synthetic keys are created, and performance suffers.
I think you just need to add the Day to the aggr:
Sum(Cost)
/
Aggr(Count(Distinct Order),Location,Day)
Is that what you want?
In a chart, as long as you have Day and Location as the dimensions, then sum(Cost) / count(Order) will give you the desired result.
If you want to hard code Location 3 with a count of 1 and Locations 1 and 2 with a count of 2, then use
=if(Location = 3, Sum(Cost)/1, Sum(Cost)/2)
Michael, thanks. You would obtain the correct values with the latter hard-coded solution, but my data model has many locations and the order volume will change, so I need to have a dynamic calculation.
BTW, I could add LOAD Location, Count(DISTINCT Order) As Volume ... Group By Location in the script as a second table and then use something like Max(Volume) in the chart expression denominator. I would prefer to do the aggregation in the chart, though, and avoid fixing the aggregation in the script.
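A sketch of that script-side alternative, assuming the source table is named Table1 as in the example below (an aggregating LOAD requires a Group By clause):

```
// Hypothetical second table: one row per location with its total
// distinct order count, linked to the data model by Location.
Volumes:
LOAD
    Location,
    Count(DISTINCT Order) As Volume
Resident Table1
Group By Location;
```

With Location as a chart dimension, the denominator could then be written as Max(Volume).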
Yeah, the only way I can think of is in the script.
Table1:
LOAD * INLINE [
Day, Cost, Location, Order
1, 40, 1, 1000
2, 100, 1, 1000
3, 150, 1, 1000
4, 200, 1, 1000
5, 250, 1, 1000
1, 30, 2, 2000
2, 50, 2, 2000
3, 100, 2, 2000
4, 120, 2, 2000
5, 150, 2, 2000
1, 30, 3, 3000
2, 50, 3, 3000
3, 50, 3, 3000
4, 80, 3, 3000
5, 100, 3, 3000
1, 45, 1, 1001
3, 105, 1, 1001
5, 155, 1, 1001
6, 205, 1, 1001
7, 255, 1, 1001
1, 35, 2, 2001
2, 55, 2, 2001
3, 105, 2, 2001
5, 125, 2, 2001
6, 155, 2, 2001
];
Orders:
LOAD DISTINCT
Location,
Order as DistinctOrder
Resident Table1;
The expression in the chart would be Sum(Cost)/Count(DistinctOrder)
Thanks Michael,
This works, though in practice it limits which fields I can use more than a chart-expression solution would. Unfortunately, it does not look like this can be done via a chart expression.
Steve