<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>ODAG vs Dynamic vs Direct Query with Impala - Which to use in App Development</title>
    <link>https://community.qlik.com/t5/App-Development/ODAG-vs-Dynamic-vs-Direct-Query-with-Impala-Which-to-use/m-p/1721540#M54886</link>
    <description>&lt;P&gt;I have a requirement that is pushing me toward some of the tools for dealing with big data. I have little&amp;nbsp;&lt;EM&gt;practical&lt;/EM&gt; experience with the available methods, and I was hoping the community could help me get started by vetting which approach might work best.&lt;/P&gt;&lt;P&gt;The source data is a "data cube" (presented as Analysis Services in Excel) served through Impala. The platform used to build the cube is called AtScale, but that is probably not important.&lt;/P&gt;&lt;P&gt;The cube has 122 dimensions, most of them hierarchical (Year &amp;gt; YearMonth; Org L1 &amp;gt; Org L2 &amp;gt; ... &amp;gt; Org L10; Product Category &amp;gt; Product Type &amp;gt; Product; you get the idea). These can be pulled in as separate dimension tables, with a key at the most granular level tying them to the measures in the fact table.&lt;BR /&gt;&lt;BR /&gt;The cube has around 25 million rows.&lt;/P&gt;&lt;P&gt;Impala query results are limited to 200,000 rows.&lt;/P&gt;&lt;P&gt;My main issue is that any query at a meaningful level of detail will exceed the 200,000-row limit. Even trying to get a count of rows for every possible combination of dimension keys will exceed that limit. I'm not sure how to approach this.&lt;/P&gt;</description>
    <pubDate>Sat, 16 Nov 2024 02:04:17 GMT</pubDate>
    <dc:creator>deec</dc:creator>
    <dc:date>2024-11-16T02:04:17Z</dc:date>
    <item>
      <title>ODAG vs Dynamic vs Direct Query with Impala - Which to use</title>
      <link>https://community.qlik.com/t5/App-Development/ODAG-vs-Dynamic-vs-Direct-Query-with-Impala-Which-to-use/m-p/1721540#M54886</link>
      <description>&lt;P&gt;I have a requirement that is pushing me toward some of the tools for dealing with big data. I have little&amp;nbsp;&lt;EM&gt;practical&lt;/EM&gt; experience with the available methods, and I was hoping the community could help me get started by vetting which approach might work best.&lt;/P&gt;&lt;P&gt;The source data is a "data cube" (presented as Analysis Services in Excel) served through Impala. The platform used to build the cube is called AtScale, but that is probably not important.&lt;/P&gt;&lt;P&gt;The cube has 122 dimensions, most of them hierarchical (Year &amp;gt; YearMonth; Org L1 &amp;gt; Org L2 &amp;gt; ... &amp;gt; Org L10; Product Category &amp;gt; Product Type &amp;gt; Product; you get the idea). These can be pulled in as separate dimension tables, with a key at the most granular level tying them to the measures in the fact table.&lt;BR /&gt;&lt;BR /&gt;The cube has around 25 million rows.&lt;/P&gt;&lt;P&gt;Impala query results are limited to 200,000 rows.&lt;/P&gt;&lt;P&gt;My main issue is that any query at a meaningful level of detail will exceed the 200,000-row limit. Even trying to get a count of rows for every possible combination of dimension keys will exceed that limit. I'm not sure how to approach this.&lt;/P&gt;</description>
      <pubDate>Sat, 16 Nov 2024 02:04:17 GMT</pubDate>
      <guid>https://community.qlik.com/t5/App-Development/ODAG-vs-Dynamic-vs-Direct-Query-with-Impala-Which-to-use/m-p/1721540#M54886</guid>
      <dc:creator>deec</dc:creator>
      <dc:date>2024-11-16T02:04:17Z</dc:date>
    </item>
  </channel>
</rss>