<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Large Data Model and Performance in QlikView</title>
    <link>https://community.qlik.com/t5/QlikView/Large-Data-Model-and-Performance/m-p/397432#M558358</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have an application in development that will access a fairly large number of transactional data records: about 12 million per quarter, and I hope to include a full year of data, so roughly 48 million rows. (I have already aggregated as much as I can.)&lt;/P&gt;&lt;P&gt;It is possible to divide my fact table into two discrete pieces, with about 2/3 of the rows in one and 1/3 in the other. Most of the charts do not use both, so there could be some savings there; however, a link table would be needed, so I am not sure how much.&lt;/P&gt;&lt;P&gt;My question is: at what point does performance become an issue? Have I already exceeded it at 20+ million rows? Testing with 8-million-row tables already has its challenges and requires server strength to keep out-of-memory issues from appearing. I do expect the application to be full featured, with many charts and tables.&lt;/P&gt;&lt;P&gt;Is separating the two groupings worth it? Would that take advantage of multi-threading or multi-processor capabilities?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 22 May 2013 18:08:15 GMT</pubDate>
    <dc:creator />
    <dc:date>2013-05-22T18:08:15Z</dc:date>
    <item>
      <title>Large Data Model and Performance.</title>
      <link>https://community.qlik.com/t5/QlikView/Large-Data-Model-and-Performance/m-p/397432#M558358</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have an application in development that will access a fairly large number of transactional data records: about 12 million per quarter, and I hope to include a full year of data, so roughly 48 million rows. (I have already aggregated as much as I can.)&lt;/P&gt;&lt;P&gt;It is possible to divide my fact table into two discrete pieces, with about 2/3 of the rows in one and 1/3 in the other. Most of the charts do not use both, so there could be some savings there; however, a link table would be needed, so I am not sure how much.&lt;/P&gt;&lt;P&gt;My question is: at what point does performance become an issue? Have I already exceeded it at 20+ million rows? Testing with 8-million-row tables already has its challenges and requires server strength to keep out-of-memory issues from appearing. I do expect the application to be full featured, with many charts and tables.&lt;/P&gt;&lt;P&gt;Is separating the two groupings worth it? Would that take advantage of multi-threading or multi-processor capabilities?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 22 May 2013 18:08:15 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Large-Data-Model-and-Performance/m-p/397432#M558358</guid>
      <dc:creator />
      <dc:date>2013-05-22T18:08:15Z</dc:date>
    </item>
  </channel>
</rss>