<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Very poor Aggr() performance in App Development</title>
    <link>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781479#M60809</link>
    <description>&lt;P&gt;Hi&amp;nbsp;@antoine81&amp;nbsp;, as you mentioned, there are a lot of rows to process, so you may want to think about creating that result set as a new table in your data model.&lt;/P&gt;&lt;P&gt;But please try this expression; it may work:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Avg(&lt;BR /&gt;Aggr(Sum([Sell Out With Taxes Group Currency]), [Contact Id])&lt;BR /&gt;/&lt;BR /&gt;Aggr(Count(Distinct [Order Id]), [Contact Id])&lt;BR /&gt;)&lt;/P&gt;</description>
    <pubDate>Wed, 10 Feb 2021 01:54:15 GMT</pubDate>
    <dc:creator>QFabian</dc:creator>
    <dc:date>2021-02-10T01:54:15Z</dc:date>
    <item>
      <title>Very poor Aggr() performance</title>
      <link>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781232#M60794</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I'm having performance problems using the Aggr() function to answer a specific use case.&lt;/P&gt;&lt;P&gt;Please find attached my data model, which contains:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;a FACTS table that is essentially order lines (so multiple rows per customer), 17 million rows in total&lt;/LI&gt;&lt;LI&gt;a CALENDAR table that only has 36 rows (for 36 months) and that is linked to FACTS on the month&lt;/LI&gt;&lt;LI&gt;a PERIOD table that maps each month to a period starting in Jan of the same year (I want to easily display Year To Month = from the first month of the year to the currently selected month)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;It gets tricky when I want to calculate measures such as &lt;EM&gt;Average Basket&lt;/EM&gt;&amp;nbsp;(sales / transactions) or &lt;EM&gt;Items Per Transaction &lt;/EM&gt;(quantity of items / transactions),&amp;nbsp;as they need to be calculated at the customer level to prevent high-spending or very frequent customers from inflating those 2 measures.&lt;/P&gt;&lt;P&gt;What I'm doing is using an Aggr() to first calculate the 2 measures by customer, and then running an Avg() over the Aggr() to get the global average.&lt;/P&gt;&lt;P&gt;Something like this:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Avg(
    Aggr(
	 Sum([Sell Out With Taxes Group Currency])
         /
         Count(Distinct [Order Id])
       , [Contact Id]
    )
)&lt;/LI-CODE&gt;&lt;P&gt;&lt;BR /&gt;So far so good. The formula works fine and returns the expected results. The problem is the loading time:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;it's acceptable (3-4 seconds) when there is a filter on the&amp;nbsp;&lt;EM&gt;YearMonth&lt;/EM&gt; (a field directly inside the table)&lt;/LI&gt;&lt;LI&gt;it's a nightmare (20-25 seconds) when there is a filter on the&amp;nbsp;&lt;EM&gt;Period&lt;/EM&gt; (a field from a linked dimension table, which also increases the dataset size)&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The weird thing is that when I create a table (in the front end) to display what the Aggr() is supposed to do, the table loads and displays all the results in 4-5 seconds (vs. 20-25 seconds for the Aggr()).&lt;/P&gt;&lt;P&gt;Any idea how I can speed this up (without remodelling)?&lt;/P&gt;&lt;P&gt;Any idea how Aggr() works and why it takes so much longer to compute than a classic table?&lt;/P&gt;&lt;P&gt;Thanks a lot!&lt;/P&gt;&lt;P&gt;Antoine&lt;/P&gt;</description>
      <pubDate>Sat, 16 Nov 2024 17:30:09 GMT</pubDate>
      <guid>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781232#M60794</guid>
      <dc:creator>antoine816</dc:creator>
      <dc:date>2024-11-16T17:30:09Z</dc:date>
    </item>
    <item>
      <title>Re: Very poor Aggr() performance</title>
      <link>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781479#M60809</link>
      <description>&lt;P&gt;Hi&amp;nbsp;@antoine81&amp;nbsp;, as you mentioned, there are a lot of rows to process, so you may want to think about creating that result set as a new table in your data model.&lt;/P&gt;&lt;P&gt;But please try this expression; it may work:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Avg(&lt;BR /&gt;Aggr(Sum([Sell Out With Taxes Group Currency]), [Contact Id])&lt;BR /&gt;/&lt;BR /&gt;Aggr(Count(Distinct [Order Id]), [Contact Id])&lt;BR /&gt;)&lt;/P&gt;</description>
      <pubDate>Wed, 10 Feb 2021 01:54:15 GMT</pubDate>
      <guid>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781479#M60809</guid>
      <dc:creator>QFabian</dc:creator>
      <dc:date>2021-02-10T01:54:15Z</dc:date>
    </item>
    <item>
      <title>Re: Very poor Aggr() performance</title>
      <link>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781617#M60822</link>
      <description>&lt;P&gt;Qlik needs to create a virtual table for each object, which provides the dimensional context in which the expressions are evaluated. This step is single-threaded and is usually the heaviest part of a calculation, while nearly all "normal" aggregations are multi-threaded.&lt;/P&gt;&lt;P&gt;An Aggr() behaves the same way: it first creates such a virtual table and then evaluates the expression. When Aggr() constructs are used within expressions, Qlik has to create not just one virtual table but one per Aggr() - and in a following step those tables need to be matched against each other, because the results must be related. This means Aggr() will always be a very expensive measure and should be avoided wherever possible.&lt;/P&gt;&lt;P&gt;In your case you can't avoid the use of an Aggr(), but you could adjust your data model or use a different approach to get the YTM results. The adjustment seems rather easy and means here: don't connect the period table to the calendar, but connect it directly to the fact table. How much will it improve performance? A bit - probably not enough for you, because the period table remains an as-of table.&lt;/P&gt;&lt;P&gt;I do like the idea of as-of tables and how easy such logic is to implement and use within the UI, but from a performance point of view it's not the fastest way to get the needed views. An alternative could be to create appropriate flags - especially if the views are based mainly on today() or month(today()) - which are then used as a multiplier and/or within a set analysis condition. If the views must be completely flexible with regard to the selections, it takes a bit more effort, for example creating a continuous month-counter field - maybe created with autonumber(year &amp;amp; month) - and then querying its max. value within the set analysis and calculating appropriate offset values from it.&lt;/P&gt;&lt;P&gt;Besides this, you should also keep your objects as small as possible - meaning don't use such an Aggr() within a pivot with a lot of dimensions, but rather in a table chart with just a few dimensions. Of course you may then need multiple objects to see all the wanted data, but usually that's a benefit rather than a disadvantage, because the smaller objects are much more readable than large tables in which you must scroll a lot and/or constantly check that you are looking at the right rows and columns.&lt;/P&gt;&lt;P&gt;- Marcus&lt;/P&gt;</description>
      <pubDate>Wed, 10 Feb 2021 11:54:10 GMT</pubDate>
      <guid>https://community.qlik.com/t5/App-Development/Very-poor-Aggr-performance/m-p/1781617#M60822</guid>
      <dc:creator>marcus_sommer</dc:creator>
      <dc:date>2021-02-10T11:54:10Z</dc:date>
    </item>
  </channel>
</rss>

