Hello QlikCommunity,
For performance reasons we have denormalized a transactional data model into one single table (we were told this was a best practice for our situation). This has caused much of the measure data to become duplicated, which inflates the values when we try to aggregate those measures. Here is an example of what happened when we combined the transaction-level data with the product-level data:
TransactionID | ProductID | TransCost
1 | 1 | $10
1 | 2 | $10
1 | 3 | $10
2 | 7 | $55
2 | 8 | $55
*Multiple products (ProductID) can be associated with one transaction, and each transaction (TransactionID) has only one cost (TransCost). As you can see, the transaction cost has been duplicated once for each product associated with that transaction.
When I attempt to Sum() the result is Sum(TransCost) = $140, when in reality it should only be $65. In other BI packages we were able to add calculation context in the formula, for example: Sum(TransCost) Foreach TransactionID, and this would ignore the duplicates.
Is there an equivalent to the above example within QlikView? I have researched Set Analysis, but haven't found a solution yet.
Thank you for sharing your knowledge and expertise!
-Chris
sum(
  aggr(
    sum(distinct TransCost),  // one deduplicated cost per group
    TransactionID             // group by transaction
  )
)
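For readers coming from other tools, the logic of the expression above can be sketched outside QlikView. This is a hypothetical Python recreation of the sample table, assuming the figures from the original post: the naive sum counts the duplicated cost once per product row, while the aggr()-style approach first reduces each TransactionID to its distinct cost and then sums those per-transaction values.

```python
# Sample rows as (TransactionID, ProductID, TransCost), mirroring the table above.
rows = [
    (1, 1, 10), (1, 2, 10), (1, 3, 10),
    (2, 7, 55), (2, 8, 55),
]

# Naive Sum(TransCost): duplicates inflate the total.
naive_sum = sum(cost for _, _, cost in rows)

# aggr()-style: collect the distinct cost(s) per transaction first...
per_transaction = {}
for trans_id, _, cost in rows:
    per_transaction.setdefault(trans_id, set()).add(cost)

# ...then sum one value per transaction.
dedup_sum = sum(sum(costs) for costs in per_transaction.values())

print(naive_sum)  # 140
print(dedup_sum)  # 65
```

Note that a plain Sum(DISTINCT TransCost) without aggr() would not be safe here: if two different transactions happened to share the same cost, one of them would be dropped. Grouping by TransactionID first avoids that.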
Thank you! This does seem to be working.
Do you think the approach we took, creating one table out of a transactional system, is appropriate as long as we use the solution you posted where needed?
It's normal to fall into this situation when you denormalize your data.
I recommend reading http://community.qlik.com/blogs/qlikviewdesignblog/2012/10/26/mixed-granularity by Henric Cronström.
Henric always writes great blog posts; he is a must-follow.