Dear Community:
For the past few days I have been playing around with getting a P/L statement to work, but so far I have failed to accomplish the objective and therefore I would like to ask the community for some input.
I am working for a shipping company and our voyages generally last from a few days to several weeks, or in some cases several years. In a year-to-date profit-and-loss statement we would like to include only the portion of a particular cost that can be allocated to the period, using as allocation key the part of a specific voyage that falls within the period, as a percentage of the total voyage duration. To do this I have used the following formula:
SUM(
VOS_Actual_AmountUSD
* AGGR((SUM([Calendar - On Hire] * [Calendar - Year 2 Date]) / [Voyage Net Duration]), _KEY_DBFixture)
)
This and the example data model with simplified data can be found in the enclosed QVW file. The object of interest is the orange object named P/L. An explanation of the various columns:
Does anybody have an idea how to fix the formula? Please do not hesitate to ask clarifying questions if needed.
Thanks.
JD
How's this? Seems better to me.
SUM(
VOS_Actual_AmountUSD
* AGGR((SUM([Calendar - On Hire] * [Calendar - Year 2 Date]) / [Voyage Net Duration]), [Dashboard Account],[Fixture - Voyage No],_KEY_VOSRecord)
)
Thanks for the example file.
Hi Jay:
With a small modification (changing [Fixture - Voyage No] to _KEY_DBFixture, an obvious change if you know the ins and outs of the solution) it seems to be working correctly. Thank you very much!
Basically your solution suggests that the AGGR() function allows more than one dimension along which the calculation is performed. Can you confirm that? The reference manual is not very clear in this respect.
Once again! Thanks.
Jochem
You can use as many dimensions as you like with AGGR. There are basically two ways that it is used in pivot tables:
(1) you want the subtotals to look right, in which case you specify all the same dimensions in AGGR that are used in the pivot table
(2) you want to compare a single fact to a group of facts, such as employee salary to average department salary, in which case you specify all the dimensions in the pivot table EXCEPT the employee dimension, and probably use the NODISTINCT modifier for AGGR
Have fun!
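To make the two patterns concrete, here are two expression sketches (the field names Department, Employee and Salary are illustrative, not from your model):

(1) Subtotals in a pivot with dimensions Department and Employee - repeat the pivot's own dimensions inside AGGR:

SUM(AGGR(SUM(Salary), Department, Employee))

(2) Compare each employee's salary to the department average - leave Employee out of the AGGR dimensions and add NODISTINCT so the department-level value is repeated on every employee row:

Salary - AGGR(NODISTINCT AVG(Salary), Department)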
Please be cautious when using this type of formula, as it can be very memory/CPU intensive. I came to the conclusion to restructure things a bit, since my 64-bit server with 8 GB of RAM spent more than 5 minutes calculating before I ended the process without getting an answer.
Hmmm... something's wrong if that happened. How many rows in your fact table?
650,000, and for each of these rows it calculates the percentage - in the AGGR() statement - with which the amount needs to be multiplied. This results in 650,000 evaluations of the AGGR().
I took a closer look at your data model and it's making some sense. You are referencing key fields, using fields on 4 tables, using compound text keys, and using a field in the AGGR that is not in the Pivot table. These are all adding to the performance issue.
The last one, using a field in AGGR that is not in the Pivot table, I think you can work out a way around that.
For the text compound keys _KEY_DBFixture and _KEY_VOSRecord, look at using AUTONUMBER() or AUTONUMBERHASH256() in the script to create a numeric equivalent. Numeric comparisons will be at least 10X faster than text comparisons!
Those two do not require any changes to your data model.
As far as using key fields in the AGGR, this is only an issue when one table has many more key values than another and you want to aggregate on the fewer key fields. If that is not the case, and the key is numeric, then don't worry about it.
Lastly, if you still have a performance issue, try joining two of the tables so you go from 4 to 3, and see if that changes anything.
Hope that helps!
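As a sketch of the AUTONUMBER() idea in the load script (the table and field names here are assumptions; only _KEY_DBFixture comes from your model):

Fixtures:
LOAD
    // AutoNumber maps each distinct string to a sequential integer;
    // the second argument keeps a separate counter per key family
    AutoNumber(VoyageNo & '|' & FixtureNo, 'DBFixture') AS _KEY_DBFixture,
    VoyageNo,
    FixtureNo
FROM Fixtures.qvd (qvd);

Apply the same call with the same ID on every table that shares the key, in the same reload, because the generated integers are not stable across reloads.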
Hi Jay:
Thanks for the explanation. I am aware of the issue related to the keys and I intend to change it in the future. However, it requires a major redesign of the entire data model (this is just a corner of it), which I intend to do later this summer. For the moment I will proceed with doing some work in the script.
If you had to choose between AUTONUMBER and AUTONUMBERHASH256, which would you choose?
Thanks!
Jochem
Autonumber works fine since you already have the strings concatenated in your script. If you have any memory issues during load, try autonumberhash256(). These two functions create different hash tables during the load and in your case autonumberhash256 would be smaller.
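For illustration, the hash-based variant looks like this in the script (the field names are assumed, not from your model):

VOSRecords:
LOAD
    // AutoNumberHash256 hashes the field values and then autonumbers the hash,
    // so no manual string concatenation is needed
    AutoNumberHash256(VOS_Voyage, VOS_RecordNo) AS _KEY_VOSRecord,
    VOS_Actual_AmountUSD
FROM VOSRecords.qvd (qvd);

Since AutoNumberHash256() takes multiple field values directly, you can skip the concatenation step you would otherwise need for AutoNumber().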