asheppardwork
Creator

Speed up a Qlik Sense Viz Lib Line Chart that is causing out-of-memory issues.

I need help optimizing or speeding up a Viz Lib Line Chart that gets the difference in amounts between two periods for comparison. The date field to compare is chosen by the users from a drop-down, and the date ranges for that field are also set by the users via a VizLib Input Form.

We have a dashboard that requires us to show a trend over time for two periods and the difference between the two periods, where the users can choose one of four date fields and set two independent ranges on that field.  The first chart is a standard bar chart and is working just fine.  The second chart is a Viz Lib Line Chart (v5.3.1) which does the difference calculation; it is very slow and causes out-of-memory errors after just one or two filter selections.

I have tried conditional calculations and limiting the record set, all to no avail.  The chart itself is populated by a dimension that can be chosen from one of three alternatives and by a calculated measure that itself uses four variables so the users can choose between four different date fields.

Measure formula:

=SUM(IF(Date([$(vBeginDateField)]) >= Date('$(vStart)') and Date([$(vEndDateField)]) <= Date('$(vEnd)'), AMOUNT))
 - SUM(IF(Date([$(vBeginDateField)]) >= Date('$(vStart2)') and Date([$(vEndDateField)]) <= Date('$(vEnd2)'), AMOUNT))
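
For reference, after dollar-sign expansion the measure evaluates to something like the following (the field names and dates here are only placeholders, not the real ones):

=SUM(IF(Date([ORDER_DATE]) >= Date('2024-01-01') and Date([SHIP_DATE]) <= Date('2024-03-31'), AMOUNT))
 - SUM(IF(Date([ORDER_DATE]) >= Date('2023-01-01') and Date([SHIP_DATE]) <= Date('2023-03-31'), AMOUNT))

Each SUM(IF(...)) is evaluated row by row across the whole data set, and the measure does that twice, which I suspect is part of why it is so heavy.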

These all have pre-populated values, so when the sheet comes up the first time it loads relatively quickly.  If the user selects even one filter the page slows to a crawl, and if they select anything else on the screen at all we get out-of-memory errors and then have to close the browser entirely to get it to respond.

I tried this in the conditional calculation but it didn't help at all:

=Aggr(
    IF(Num(Rank(Sum(AMOUNT),4,1)) <= vTop, Aggr(Num(Rank(Sum(AMOUNT),4,1)), ROW_ID),
       IF(vTop = '', Aggr(Num(Rank(Sum(AMOUNT),4,1)), ROW_ID))
    ),
ROW_ID)

I also tried hard-coding all of the field values except the date ranges, and the behavior is still the same.

If you have any insight, it would be very much appreciated.

1 Solution

Accepted Solutions
marcus_sommer

Your calculations are far from optimized - quite independently of which chart is used - because nested IF loops and nested Aggr() are applied. Therefore I suggest changes along the following lines:

SUM({< [$(vBeginDateField)] = {">='$(vStart)'"}, [$(vEndDateField)] = {"<='$(vEnd)'"}>} AMOUNT)

as logic for the measure and something like

Aggr(Num(Rank(Sum(AMOUNT),4,1)) * -(Num(Rank(Sum(AMOUNT),4,1))<=vTop), ROW_ID)

as the calculated dimension - whereby vTop should never be empty, or there should be another piece of logic/variable in between that returns a sensible default value, maybe just 100000 or something like Count(DISTINCT ROW_ID).
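
Putting both pieces together, a minimal sketch (keeping the variable and field names from the question; the Alt() fallback is just one possible way to supply the default for vTop, not part of the original reply) could be:

// Difference measure: period 1 minus period 2, each restricted via set analysis instead of a row-level IF()
SUM({< [$(vBeginDateField)] = {">='$(vStart)'"}, [$(vEndDateField)] = {"<='$(vEnd)'"} >} AMOUNT)
 - SUM({< [$(vBeginDateField)] = {">='$(vStart2)'"}, [$(vEndDateField)] = {"<='$(vEnd2)'"} >} AMOUNT)

// Calculated dimension: minus the rank for ROW_IDs within the top vTop (0 otherwise), with a large default when vTop is empty
Aggr(Num(Rank(Sum(AMOUNT),4,1)) * -(Num(Rank(Sum(AMOUNT),4,1)) <= Alt(vTop, 100000)), ROW_ID)

The set-analysis conditions are resolved once per chart rather than once per row, which is usually far lighter than the nested SUM(IF(...)) version.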

 
