Hello Community,
For my first post, I've got a nice QlikView brain teaser that I just can't solve. It seems no one has ever had my issue; I can't find any mention of it on the board.
I need to prepare a standard Variance Analysis of Turnovers for Sales Controlling.
Is there a possibility to do a calculation on basic record level (no aggregation, yet) and then sum up the result in a chart as if it was a loaded record?
On record level, there are multiple attribute columns (25+). In the QV chart, only one to three of these attributes are used at a time as dimensions. Depending on the analysis, several charts with different attributes may be used to analyse the cause of variance (product, market, customer, etc.).
The sum() formula returns wrong numbers. It does not calculate on record level (with all attributes) but first sums up the data to the reduced dimensions - and then applies the calculation. The result is a kind of weighted average, but not the correct number.
The expressions used in the chart (e.g. Pivot) are variances calculations of different years like
DeltaVolumeEffect = (Volume2011-Volume2010) * Price2010
There are five variance expressions, one for each variance component, that together make up the total turnover variance.
The expression formulas can't be moved to the load script because the user must be able to choose the year, datatype and period he needs to analyze.
The aggr() function won't help, either, because there are too many attributes.
Can QlikView do a calculation on record level - and then use the record-level result in the sum() to get the correct number?
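To make the symptom concrete, here is a small plain-Python sketch (with made-up numbers, not actual data) of the difference between calculating per record and letting the chart aggregate the fields first:

```python
# Hypothetical records: why summing first and then applying the formula
# gives a different ("weighted average") result than calculating per
# record and summing the results afterwards.

records = [
    # (product, volume_2010, volume_2011, price_2010)
    ("A", 100, 120, 2.0),
    ("B", 50, 40, 10.0),
]

# Record level: calculate the variance per record, then sum the results.
record_level = sum((v11 - v10) * p10 for _, v10, v11, p10 in records)

# Chart level: the fields are aggregated over the chart dimension first,
# and the formula is applied once to the aggregated values.
tot_v10 = sum(r[1] for r in records)
tot_v11 = sum(r[2] for r in records)
avg_p10 = sum(r[3] for r in records) / len(records)
chart_level = (tot_v11 - tot_v10) * avg_p10

print(record_level)  # 20*2.0 + (-10)*10.0 = -60.0
print(chart_level)   # (160-150) * 6.0 = 60.0 -- a very different number
```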
Thanks for your assistance,
Jens
Hi Jens
Maybe I don't understand your question well, but you can use the values of your columns if you need to do calculations based on a previous calculation.
There are functions like Column(), ColumnNo(), Before(), etc., and you can also do calculations between rows
with Above() and Previous(); they are all in the help menu.
hope this helps
Chema
Jens,
any calculation enclosed inside the sum() is performed at the record level. If you combine multiple aggregation functions, then you are operating on the end results of the aggregation.
For example:
sum(Sales - Returns) - at the record level, Returns are subtracted from Sales, and then the results are summarized. A common problem may be caused by the fact that those records that have Sales, don't have any Returns, and vice versa - Returns Records usually don't have Sales. So, one of the two components is always NULL, rendering the whole expression NULL. To overcome this problem, we use RANGESUM instead of a simple addition or subtraction:
sum(RANGESUM(Sales, -Returns))
Contrary to this, the following is an example of a calculation that is performed on the aggregated results:
sum(Sales) - sum(Returns).
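The NULL pitfall and the RANGESUM workaround can be sketched in plain Python (hypothetical rows; None stands in for QlikView's NULL):

```python
# Hypothetical rows: sales records have no Returns, returns records have
# no Sales, so a plain subtraction yields NULL on every row.

rows = [
    {"Sales": 100, "Returns": None},  # a sales record: no Returns
    {"Sales": None, "Returns": 20},   # a returns record: no Sales
]

def minus(a, b):
    # sum(Sales - Returns): the row result is NULL if either side is NULL
    return None if a is None or b is None else a - b

def neg(x):
    return None if x is None else -x

def rangesum(*args):
    # RANGESUM: ignores NULLs, so a lone value still contributes
    return sum(x for x in args if x is not None)

naive = sum(v for v in (minus(r["Sales"], r["Returns"]) for r in rows)
            if v is not None)
robust = sum(rangesum(r["Sales"], neg(r["Returns"])) for r in rows)

print(naive)   # 0  -- every row collapsed to NULL
print(robust)  # 80 -- 100 - 20
```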
If you refer to your Expressions by name, you are also using the aggregated results:
Gross Sales = sum(Sales)
Returns = sum(Returns)
Net Sales = [Gross Sales] - Returns
Using your own example:
sum((Volume2011-Volume2010) * Price2010) is the correct syntax that will be performed at the record level, assuming that Volume2011 and Volume2010 are Data Fields and not Expression Labels. If those are Expressions in the chart, then of course you are using the aggregated data.
If I'm missing your problem, please elaborate or post an example.
cheers,
Oleg
Oleg, Chema
Thanks for your explanations.
The issue may be that I can't use the Data Fields themselves for the calculation. I must use Set Analysis to identify the desired components for the calculation. I can't see a way to tell QV that it must not apply the expression to subtotals.
Here's a more specific example.
There is one DataField for VOLUME which has Dimensions for YEAR, DATATYPE etc.
The volume variance DeltaTO_Vol = (Volume2012 - Volume2011) * Price2011 would be
DeltaTO_Vol = ( sum ({< YEAR={2012} >} VOLUME) - sum ({< YEAR={2011} >} VOLUME) ) * sum ({< YEAR={2011} >} PRICE )
This does the calculation on record level in the Pivot Chart, which is fine, but then the Expression also runs on the subtotals, which is totally off.
Got any idea how to tell QV to sum up the result instead of running the Expression on subtotals?
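A plain-Python sketch (hypothetical numbers) of why the subtotal is off when the expression is re-evaluated on the aggregated fields instead of summing the row results:

```python
# Hypothetical pivot rows at the lowest dimension level.

rows = [
    # (customer, vol_2011, vol_2012, price_2011)
    ("X", 10, 15, 4.0),
    ("Y", 20, 18, 1.0),
]

# Row level (what the pivot shows per dimension value) -- correct
row_results = [(v12 - v11) * p11 for _, v11, v12, p11 in rows]
correct_subtotal = sum(row_results)          # 5*4.0 + (-2)*1.0 = 18.0

# Subtotal level: the expression is re-evaluated on the aggregated
# fields, so the price term becomes sum(PRICE) over all rows -- wrong
wrong_subtotal = (sum(r[2] for r in rows) - sum(r[1] for r in rows)) \
                 * sum(r[3] for r in rows)   # (33-30) * 5.0 = 15.0

print(correct_subtotal, wrong_subtotal)
```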
Cheers,
Jens
Jens,
you are controlling what to aggregate and when - every time you use sum(), you are effectively "telling" QV to aggregate the calculation.
The problem here is that you want to dollarize 2012 Volume with 2011 Price, and they are not properly associated. The solution might be hidden in the Data Model. Build your data model in such a way that the 2012 Volume is associated with 2011 Price, and your problem becomes much simpler.
For example, if you can add a new data field, call it LastYearPrice, and assign the prior year's prices to each Volume record, then your Delta Expression could look like this:
sum ({< YEAR={2012} >} VOLUME*LastYearPrice) - sum ({< YEAR={2011} >} VOLUME*PRICE)
Much simpler, right?
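As a plain-Python sketch (hypothetical rows and field names) of what the reshaped data model buys you, once each 2012 volume row carries the prior year's price, the whole delta collapses to two plain sums with no cross-year association needed:

```python
# Hypothetical fact rows after the data-model change: each 2012 row
# already carries the matching 2011 price as last_year_price.

rows = [
    # (year, volume, price, last_year_price)
    (2012, 15, 4.5, 4.0),
    (2012, 18, 1.2, 1.0),
    (2011, 10, 4.0, None),
    (2011, 20, 1.0, None),
]

turnover_2012_at_2011_prices = sum(v * lyp for y, v, p, lyp in rows
                                   if y == 2012)
turnover_2011 = sum(v * p for y, v, p, lyp in rows if y == 2011)

delta = turnover_2012_at_2011_prices - turnover_2011
print(delta)  # 78.0 - 60.0 = 18.0
```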
cheers,
Oleg
Oleg,
Agreed, it would be simpler if it were possible to change the data model. The requirement, however, is that a user can select various reporting states (combinations of YEAR, DATATYPE and PERIOD) for variance analysis. For a first pass, there are at least six different options to compare the running ACTUAL against (the Plan of the Year, three Plan updates of the Year, the Previous Year, and the Previous Month for the monthly comparison). In a second step, the users should be able to analyse the effects of a certain reporting state against any other state as necessary. And lastly, there are more additional fields than just price.
It can't be the right approach to load the same data multiple times for each reporting state each month.
It seems there's no manageable solution for this issue with Set Analysis.
Thanks for your help, anyway.
Cheers,
Jens
Hi Jens,
We have a similar requirement. Did you find a solution to this problem? If so, please share it with us.
Thanks,
Ram
Hi Ram,
Yes, eventually.
In the data model, create a combined key for the lowest granularity that you need for the calculation. Leave out the time fields (because these are the basis for the comparison). This gives the level needed for the aggr().
Using sum() and aggr() gave the correct result for the Variance due to Volume Change:
=sum(aggr(sum(val_new)/sum(vol_new), NEWKEY) // calculates price_new
*aggr(sum(vol_new)-sum(vol_old), NEWKEY)) // calculates the volume difference
The Variance due to Price Change works the same way. I selected "new" and "old" values with Set Analysis and Variables.
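For anyone trying to follow along, here is a plain-Python sketch of what that sum(aggr(...)) expression computes (hypothetical rows; NEWKEY stands for the combined key without the time fields). Per key, price_new = sum(val_new)/sum(vol_new) is multiplied by that key's volume difference, and the outer sum adds up the per-key results:

```python
from collections import defaultdict

# Hypothetical fact rows keyed by the combined NEWKEY.
rows = [
    # (NEWKEY, val_new, vol_new, vol_old)
    ("A", 60.0, 15, 10),
    ("A", 0.0, 0, 5),
    ("B", 18.0, 18, 20),
]

# Group to NEWKEY level, like aggr(..., NEWKEY) does.
by_key = defaultdict(lambda: [0.0, 0, 0])  # sums of val_new, vol_new, vol_old
for key, val_new, vol_new, vol_old in rows:
    by_key[key][0] += val_new
    by_key[key][1] += vol_new
    by_key[key][2] += vol_old

# Outer sum over the per-key results: price_new * volume difference.
variance_due_to_volume = sum(
    (val / vol_n) * (vol_n - vol_o)
    for val, vol_n, vol_o in by_key.values()
)
print(variance_due_to_volume)
```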
Hope this helps.
Regards,
Jens
Thanks Jens... This solved our problem...
Hi Jens,
I have a similar requirement. Can you please explain in detail what you did to get the required result?
Thanks in advance for your reply.
Thanks
Romil