Contributor II

Trending a point-in-time / semi-additive metric calculated from transactional data

Hello, fellow Qlikies,

I'm trying to figure out if there is a scalable way of building an expression that would allow me to show a trend of a metric that is normally calculated as point-in-time based on live transactional data.

A good example is a metric like "Count of open cases". Assuming that I have the detailed case data, I can calculate the count of open cases as of Date X as follows. The following is simplified pseudocode for readability.

Count(Cases Created Prior to Date X AND Still Open) + Count(Cases Created Prior to Date X AND Closed After Date X)

This approach works fine and allows me to get the total of open cases as of start/end of any given period, e.g. This Month, Last Quarter, etc. It requires a bit of tweaking of a set expression to have it take min/max date of the selected period. And it also enables breaking the total down into groups using any supported dimensions, e.g. see the count of open cases by product.
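In real set-analysis syntax, that pseudocode might look roughly like the following. This is only a sketch: the field names (CaseID, CreatedDate, ClosedDate, Status) and the variable vAsOfDate are assumed, not taken from the post.

```
// Open cases as of $(vAsOfDate): created on/before that date AND
// either still open, or closed only afterwards (set union of the two groups)
Count({< CreatedDate = {"<=$(vAsOfDate)"}, Status = {'Open'} >
     + < CreatedDate = {"<=$(vAsOfDate)"}, ClosedDate = {">$(vAsOfDate)"} >}
DISTINCT CaseID)
```

For a period like This Month, vAsOfDate would be populated with the min or max date of the selected period, as described above.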

I run into trouble when I try to show a monthly trend of open cases based on the same logic. Since my expression is calculated across the entire set, the resulting min/max dates are from the entire time frame instead of being bound to each month on my chart.

The most promising solutions I could find so far revolve around utilizing FirstSortedValue, along with some embedded aggregations. Following is a simplified example:

FirstSortedValue(Aggr(<count of open cases>, Month, Date), -Aggr(Date, Month, Date))

This seems like a step in the right direction but things get pretty messy as I start to aggregate my initial logic by different grains of time. And I have a hard time thinking about throwing additional dimensions into the mix for generating further breakdowns, such as monthly open cases by product.

Would appreciate any advice and further guidance people may have upon reading this blurb. Happy to provide more details and capture good solution options for those who may have this question in the future.

Labels (3)
3 Replies

Hi @ozz1k ,

this kind of problem is better solved in the load script. If you only need to show it in a line chart or a table, it is worth precalculating it, aggregating the values by month and by any other dimensions you need to filter on.
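One way to precalculate this in the load script (a sketch only; the table and field names Cases, CaseID, CreatedDate, ClosedDate are assumptions about the poster's model):

```
// Generate one row per month end for the period of interest
MonthEnds:
LOAD Date(MonthEnd(AddMonths(MakeDate(2024, 1, 1), RecNo() - 1))) AS AsOfDate
AutoGenerate 12;

// Join with no common field = cartesian product:
// every case paired with every month end
Join (MonthEnds)
LOAD CaseID, CreatedDate, ClosedDate
RESIDENT Cases;

// Keep only the cases that were open at each month end, then aggregate
OpenCasesByMonth:
LOAD AsOfDate,
     Count(DISTINCT CaseID) AS OpenCases
RESIDENT MonthEnds
WHERE CreatedDate <= AsOfDate
  AND (Len(ClosedDate) = 0 OR ClosedDate > AsOfDate)
GROUP BY AsOfDate;

DROP TABLE MonthEnds;
```

Additional breakdown fields (e.g. Product) can be carried through the join and added to the GROUP BY.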

Best regards.

Contributor II

Hi Mario - Thank you for the reply. Precalculating is always an option and it definitely makes the frontend logic much simpler. That said, calculating on the fly has its benefits, especially when dealing with non-additive metrics like distinct count or average. I wanted to see if there is a smart way to organize a set expression that would enable the required calculation based on the raw/transaction data.


One idea could be to have a dimension island table in the data model, like the one below, and always keep exactly one value selected. All of these values must be actual field names from your data model -

Dimension:
LOAD * INLINE [
Dimension
Product
Month
];
// the values shown here are only examples; use real field names from your model
Since you always have exactly one value selected in this table, you can use $(=Dimension) in your set expressions and as an Aggr dimension to switch the chart dimensions dynamically.
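To make that concrete, a chart could use the island value both as a calculated dimension and inside the Aggr. This is a sketch only; the field names CaseID, MonthYear, and Date are assumptions, and Only() guards against no value being selected:

```
// Calculated chart dimension - expands to whichever field is selected
// in the Dimension island table
=[$(=Only(Dimension))]

// Trend expression: last known value per month, broken down by the
// chosen field (the inner count still needs the point-in-time logic)
FirstSortedValue(
    Aggr(Count(DISTINCT CaseID), MonthYear, [$(=Only(Dimension))], Date),
    -Aggr(Date, MonthYear, [$(=Only(Dimension))], Date)
)
```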