<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Performance issues: your advices in QlikView</title>
    <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243179#M92881</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks John,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I always appreciate reading your answers. I am also wondering how to pass all the filters applied from one document to another; I'll see...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regarding the other solution (splitting the document via Publisher), I will talk to my functional team about it!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 03 Aug 2011 03:06:31 GMT</pubDate>
    <dc:creator />
    <dc:date>2011-08-03T03:06:31Z</dc:date>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243173#M92875</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am building a QV application which deals with invoices. My fact table has around 45 million records (for 36 months). The number of records will only grow slowly, as I have been told to expose 36 months maximum. My problem is not with the loading steps, as I have followed a multi-tier architecture with several QVW files to handle incremental QVDs. I have also avoided synthetic keys, and kept my data model as a star schema. I am not far from the recommendations mentioned in the following topic &lt;A _jive_internal="true" href="https://community.qlik.com/message/111448#111448"&gt;http://community.qlik.com/message/111448#111448&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My problem is more on the frontend: with a single user connected to the application, I find that navigation and selection are not that fast.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="text-decoration: underline;"&gt;Details of the server:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Windows 2003 R2 / Enterprise x64 Edition / Service Pack 2&lt;/P&gt;&lt;P&gt;CPU: Intel Xeon E5320 @ 1.86GHz with 8GB of RAM&lt;/P&gt;&lt;P&gt;In the Task Manager, I can see 8 CPUs running.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My QVW frontend is 1.9GB. It takes a few minutes for one end user to open the document through IE6 with the QV plugin; and for every user action, all 8 CPUs run at 100% until the data is returned to the client. Once concurrent users start using the application, it will definitely worsen the performance even more.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I wanted to aggregate the data even more, to drastically reduce the number of records. An easy approach would have been to remove the invoiceID and group by all the other dimensions. The problem is that the functional team wants to keep this dimension... 
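&lt;/P&gt;&lt;P&gt;For illustration (the table and field names here are made up, not my real model), the kind of aggregated load I have in mind would look something like:&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_text_macro jive_macro_code"&gt;&lt;P&gt;AggregatedInvoices:&lt;/P&gt;&lt;P&gt;LOAD CustomerID, ProductID, InvoiceMonth,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Sum(Amount) as Amount, Sum(Quantity) as Quantity&lt;/P&gt;&lt;P&gt;RESIDENT Invoices&lt;/P&gt;&lt;P&gt;GROUP BY CustomerID, ProductID, InvoiceMonth;&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;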
We are migrating from OLAP technologies, where they used to have a 'Drill Through' feature to access the very detailed data. So, invoiceID would not be used for pivoting, but rather for identifying a specific record.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So, would it be possible for me to build 2 QlikView frontends:&lt;/P&gt;&lt;P&gt; - the first one, faster, with the highly aggregated data (without the invoiceID)&lt;/P&gt;&lt;P&gt; - the second one, slower, with the detailed data (including the invoiceID)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;End users will mainly use the first frontend, but when they want to access the detail (drill through), it will direct them to another QlikView file (while keeping the selections made in the first file). The second QlikView frontend will have only a single table box.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is this architecture possible? Feel free to share your advice.&lt;/P&gt;&lt;P&gt;Is the capacity of the server too low for my requirements? Later, I will need to deploy the same application for other subsidiaries, which will definitely increase the number of users and the workload on the server.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks in advance,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Nicolas&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 02 Aug 2011 09:58:53 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243173#M92875</guid>
      <dc:creator />
      <dc:date>2011-08-02T09:58:53Z</dc:date>
    </item>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243174#M92876</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Your application (1.9 GB) seems too big for 45 million rows. Get the statistics file from the document properties and post it here.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 02 Aug 2011 14:54:29 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243174#M92876</guid>
      <dc:creator>danielrozental</dc:creator>
      <dc:date>2011-08-02T14:54:29Z</dc:date>
    </item>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243175#M92877</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi there, as a general suggestion you may consider fixing your model into a star-compliant schema, with a single transaction table in the middle and master data around it. Also, mark the preload option on every application on the server, so the user does not wait that long the first time he opens the application. The other thing that is happening is that you are not taking advantage of newer hardware architectures: the processor (Intel Xeon E5320) still has a front-side bus instead of the new "QPI links" technology, and the maximum RAM speed supported by the processor is 667MHz, whereas the newer architectures support 1333MHz and quad-channel memory. Furthermore, the E prefix on the processor means that it is an energy-saving variant; it is recommended to use an Intel X- or W-prefixed processor.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You also want to make sure that the "power options" in the Control Panel are set to high performance. QlikTech also fixed a performance issue with expressions using set analysis in the QV 9 SR7 release, so make sure you have at least that version installed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 02 Aug 2011 15:29:14 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243175#M92877</guid>
      <dc:creator />
      <dc:date>2011-08-02T15:29:14Z</dc:date>
    </item>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243176#M92878</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Assuming there's no data model error and your charts are reasonably-coded for performance, the "right" solution seems to be to get better hardware.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;That may, of course, not be practical.&amp;nbsp; I think the architecture you're considering should be possible.&amp;nbsp; Keep track of a record count when you aggregate, and then only open up the other document if the record count is small enough.&amp;nbsp; I've never chained from one document to another on anything other than a test basis, so I'm not sure how exactly you pass in all of the filters and such, but I assume it's doable.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I had a similar but less severe issue with one of my applications.&amp;nbsp; What I did with mine was recognize that 90% of the user activity was occurring on only the most recent data.&amp;nbsp; In my case, only the year to date information was really critical, even though I keep 5 years of data.&amp;nbsp; So I use QlikView Publisher to select the current year, reduce the data to match, and create a separate document for that.&amp;nbsp; Now, by end of year, the YTD document will be pretty slow, but still nowhere near as slow as the 5 year document.&amp;nbsp; Perhaps your users spend 90% of their time interacting with only a small subset of your data.&amp;nbsp; If so, perhaps the same approach would serve their needs.&amp;nbsp; Most of their work could be done on the smaller, faster document.&amp;nbsp; Only when they really need to dig into some old (or otherwise uncommon) data would they need to bring up the monster document.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 02 Aug 2011 22:31:00 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243176#M92878</guid>
      <dc:creator>johnw</dc:creator>
      <dc:date>2011-08-02T22:31:00Z</dc:date>
    </item>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243177#M92879</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ivan,&lt;/P&gt;&lt;P&gt;Thank you for all the explanations regarding the server &amp;amp; RAM details. &lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;TABLE border="1"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;You also want to make sure that the "power options" in the Control Panel are set to high performance. QlikTech also fixed a performance issue with expressions using set analysis in the QV 9 SR7 release, so make sure you have at least that version installed.&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Do you mean: Control Panel &amp;gt; Power Options &amp;gt; Power Schemes = Always On?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 02:59:35 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243177#M92879</guid>
      <dc:creator />
      <dc:date>2011-08-03T02:59:35Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243178#M92880</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE&gt;&lt;TABLE border="1"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Daniel Rozental wrote:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Your application (1.9 GB) seems too big for 45 million rows. Get the statistics file from the document properties and post it here.&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi Daniel,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have loaded the *.mem file into the QlikView Optimizer 8.5.qvw that I found online. Here it is... Let me know your advice.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 03:01:41 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243178#M92880</guid>
      <dc:creator />
      <dc:date>2011-08-03T03:01:41Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243179#M92881</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks John,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I always appreciate reading your answers. I am also wondering how to pass all the filters applied from one document to another; I'll see...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regarding the other solution (splitting the document via Publisher), I will talk to my functional team about it!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 03:06:31 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243179#M92881</guid>
      <dc:creator />
      <dc:date>2011-08-03T03:06:31Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243180#M92882</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The simplest and quickest solution I can suggest based on your problem description is:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;First of all, on the default sheet that opens when a user accesses the QV application, keep only one chart maximized and keep all the others minimized. Also, have the maximized chart show summary details, leaving out invoice ID and the other detail fields, and provide another chart or table with detailed field information such as invoice number, order date, line number, etc.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please find attached a sample that may help you.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 05:44:34 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243180#M92882</guid>
      <dc:creator />
      <dc:date>2011-08-03T05:44:34Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243181#M92883</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;BLOCKQUOTE&gt;&lt;TABLE border="1"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;n.allano wrote:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi Ivan,&lt;/P&gt;&lt;P&gt;Thank you for all the explanations regarding the server &amp;amp; RAM details. &lt;/P&gt;&lt;BLOCKQUOTE class="jive-quote"&gt;&lt;P&gt;You also want to make sure that the "power options" in the Control Panel are set to high performance. QlikTech also fixed a performance issue with expressions using set analysis in the QV 9 SR7 release, so make sure you have at least that version installed.&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Do you mean: Control Panel &amp;gt; Power Options &amp;gt; Power Schemes = Always On?&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi again, yes I meant Power Options (sorry for the translation, I have my OS in a different language); under that menu there should be a power plan named High Performance.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 13:27:47 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243181#M92883</guid>
      <dc:creator />
      <dc:date>2011-08-03T13:27:47Z</dc:date>
    </item>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243182#M92884</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Nicolas,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'd echo John's recommendations. We're on v8.5 and generally a document will take over 3 times its disk space when loaded into RAM, so just by opening a 1.9G document your server with 8G of RAM will be on its knees.&lt;/P&gt;&lt;P&gt;The default settings of QVS limit the qvs.exe process to 70% of available RAM anyway, so just with this the system will be under stress. Given your description of processor activity I'd say the system is working overtime due to the size.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So the first option is to see if you can add more RAM - not the most expensive thing to do - and it might just do the trick.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However, I'd challenge your functional team on the design. Who is the application for? What questions is it set up to answer? Who needs to drill down to individual invoices? How often do they need to do it? What information at an individual invoice level do they need to see?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The above is not to say that this shouldn't be done or makes no sense, but pursuing those questions will help you define your architecture.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Look at your stats. 3 years of data, 45m invoices, 48k customers, 5k products. That is a lot of activity to analyse. So who is going to be digging down to the bottom level?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Do you penalise your top execs who want a nice dashboard, or do you build a separate document, as you describe, for those who need to drill down to the lowest level? Can the lowest level be split into different areas of responsibility - geographical? organisational? - or by date as John suggests?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;QV have introduced document chaining, which in theory will help you with the idea of drilling through from one document to another. 
This isn't available in v8.5, but what you could do is build several lowest-level documents and then have one button which will launch the detailed document and, by playing with current selections, user IDs etc., "know" which detailed document to launch when a request is made.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Anyway, bon courage and thanks for posting - it is an interesting theme.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 14:58:58 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243182#M92884</guid>
      <dc:creator>pat_agen</dc:creator>
      <dc:date>2011-08-03T14:58:58Z</dc:date>
    </item>
    <item>
      <title>Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243183#M92885</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Qvw Optimizer looks good, you should look into adding more memory or building different applications with data aggregated at different levels.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you can, try removing as many fields as possible from the invoices table since it's quite big with 89 fields, removing 8 fields would probably cause around a 10% decrease in memory needed.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 03 Aug 2011 18:18:13 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243183#M92885</guid>
      <dc:creator>danielrozental</dc:creator>
      <dc:date>2011-08-03T18:18:13Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243184#M92886</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Daniel,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am working closely with the functional team to review their requirements. Of the 89 fields, we have removed 12 of them, mostly dimensions. This has therefore affected the result of my aggregation.&lt;/P&gt;&lt;P&gt;For one month, I used to have 2.1 million records (QVD = 200MB); with this operation I have reduced my records by 50% (1 million records / QVD = 75MB)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;At the very beginning, I was avoiding synthetic keys with a simple selection/concatenation of fields.&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_text_macro jive_macro_code"&gt;&lt;P&gt;FieldA &amp;amp; FieldB &amp;amp; FieldC as TableKey&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;Now, I am experimenting with the function AutoNumberHash128; I hope it will remain consistent (through the QVDs).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am not sure which is better between:&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_text_macro jive_macro_code"&gt;&lt;P&gt;Autonumberhash128(FieldA &amp;amp;'|'&amp;amp; FieldB &amp;amp;'|'&amp;amp; FieldC) as TableKey&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;or&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_text_macro jive_macro_code"&gt;&lt;P&gt;Autonumberhash128(FieldA, FieldB, FieldC) as TableKey&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;I have tried both; it does not make any difference to the storage of the QVD on disk. 
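&lt;/P&gt;&lt;P&gt;By the way, to check whether my original concatenation (without separators) was creating key collisions, a quick count comparison can help (a sketch; "Invoices" stands for my fact table here):&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_text_macro jive_macro_code"&gt;&lt;P&gt;KeyCheck:&lt;/P&gt;&lt;P&gt;LOAD Count(DISTINCT FieldA &amp;amp; FieldB &amp;amp; FieldC) as KeysWithoutSeparator,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Count(DISTINCT FieldA &amp;amp;'|'&amp;amp; FieldB &amp;amp;'|'&amp;amp; FieldC) as KeysWithSeparator&lt;/P&gt;&lt;P&gt;RESIDENT Invoices;&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;If KeysWithoutSeparator is smaller than KeysWithSeparator, the plain concatenation is colliding on this data.&lt;/P&gt;&lt;P&gt;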
One of the fields is a date field (with timestamp; let's say FieldC); I am not sure whether I should apply any operation to the date before building the TableKey.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I may have drastically reduced the number of records, but sooner or later I will face other performance issues, as my server configuration is not optimal, a few deployments of my application will be done on the same server, and the number of users will multiply.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I will keep updating this thread with my progress. Feel free to share further advice.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 05 Aug 2011 04:19:24 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243184#M92886</guid>
      <dc:creator />
      <dc:date>2011-08-05T04:19:24Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243185#M92887</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I quickly gave up on AutoNumberHash*, as I need the ID to be persistent across several QVW scripts...&lt;/P&gt;&lt;P&gt;So, I will make use of Hash128 or Hash256, even though the output is a string...&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 05 Aug 2011 09:23:13 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243185#M92887</guid>
      <dc:creator />
      <dc:date>2011-08-05T09:23:13Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243186#M92888</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am still trying to optimize my QlikView file. Here are a few scenarios/measurements:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Remarks&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;gt; ProcessingTime = time to load all my QVDs into my "Frontend.qvw". End users will then access this file through the IE plugin.&lt;/P&gt;&lt;P&gt;&amp;gt; For now, I have no idea about the response time offered to the end user. Later, a team will be in charge of benchmarking my application (accessing the app through the IE plugin / re-running pre-defined scenarios several times / tests with concurrent users, ...), but I can't wait for those results to optimize my application.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1st try: All dimensions / No aggregation / Star schema&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Records: 66.5M (=33 months) / Filesize: 2.6Gb (Compression: High) / ProcessingTime: 9-10min&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2nd try: 1st selection of dimensions / 1st aggregation / Star schema&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Records: 44.5M (=36 months) / Filesize: 1.85Gb (Compression: High)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;3rd try: Final selection of dimensions + aggregation / Star schema&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Records: 33.5M (=36 months) / Filesize: 1.32Gb (Compression: High) or 2.95Gb (Compression: None) / ProcessingTime: 5-6min&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;4th try: Final selection of dimensions + aggregation / Left join of all satellite tables (except Customer &amp;amp; Product referentials)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Records: 33.5M (=36 months) / Filesize: 730Mb (Compression: High) or 3.2Gb (Compression: None) / ProcessingTime: ~40min&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;5th try: Final selection of dimensions + aggregation / Left join of all satellite tables (including Customer &amp;amp; Product referentials) = 1 single table&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Records: 33.5M (=36 months) / Filesize: 880Mb (Compression: High) / ProcessingTime: ~1h20min&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We would all agree that adjusting the data model has a huge impact on the file size (at least the compressed one) and the processing time. Between scenarios #3 and #4, the processing time is multiplied by 8 whereas storage on disk is divided by 2. I would not mind explaining to my project team that we are going to increase the processing time so that end users can enjoy better performance. But would that be the case? Would scenario #4 be faster than #3?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My data model is quite simple; I have 1 fact table (Invoices), 2 main referentials (Customer: 38k records &amp;amp; Product: 5k records) and other satellite referentials (usually 2 columns: Code + Label, &amp;lt;100 records). For the satellite tables, I would have preferred to use Mapping rather than a left join, but I am stuck with the scenario mentioned in this thread &lt;A _jive_internal="true" href="https://community.qlik.com/message/138537#138537"&gt;http://community.qlik.com/message/138537#138537&lt;/A&gt;. I am looking forward to seeing how mapping would affect the processing time in my scenario #3.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;For all the scenarios mentioned above (except the 1st one), I have retrieved the statistics file and loaded them into the attached Optimizer file. Apart from the recommendation given in the Optimizer file ("Take a close look at "bytes" column in Actual Usage tab to find out which object or field is costing the most memory"), what else should we look for? Could anyone explain the meaning of each Class (ex: State Space) and SubType (State, Symbols, ...)?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;From what I have read:&lt;/P&gt;&lt;P&gt; - Once the file is accessed by the 1st user, RAM consumption will be equal to the file size with no compression. Is that true? Or should I refer to the 'Comparison' tab (from the QlikView Optimizer)?&lt;/P&gt;&lt;P&gt; - Then, for every additional user, add 10-15% RAM consumption.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Feel free to share your experience and give me some advice.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 08 Aug 2011 12:40:40 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243186#M92888</guid>
      <dc:creator />
      <dc:date>2011-08-08T12:40:40Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243187#M92889</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Would it be practical to map in the labels during the CREATION of the QVD instead of when you read it in?&amp;nbsp; I often take that approach.&amp;nbsp; End user applications usually show the descriptions instead of or in addition to the codes, so to keep things simple and let them do an optimized load, I often add descriptions to the main QVDs.&amp;nbsp; Mapping during creation of the QVD won't break an optimized load, and won't hit the bug you found in the linked thread.&amp;nbsp; I typically keep product and customer data in their own tables, though, but again, would have the descriptions on those tables in addition to the codes.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I suspect that having the descriptions on your three main tables will give a slightly faster user experience than keeping them on separate tables.&amp;nbsp; I do think only slightly, though.&amp;nbsp; Testing, of course, will tell you for sure, but if you can't wait for that, then I'd aim for putting the descriptions on the main tables and just trying to do that as efficiently as possible.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 08 Aug 2011 16:51:34 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243187#M92889</guid>
      <dc:creator>johnw</dc:creator>
      <dc:date>2011-08-08T16:51:34Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243188#M92890</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi John,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Mapping the labels during the creation of the QVDs would mean that every time I initiate the batch (daily or monthly, I do not know yet; but at least whenever a referential has changed), I re-process my whole history of data (36 months x 1 million records/month), and this would be quite costly in processing time. I tried it at the very beginning (before filtering, field selection, and aggregation); it took 3h for 66 million records. Now that I am left with 36 million records, the processing should be below 2h (quick estimate).&lt;/P&gt;&lt;P&gt;I would definitely save some time on loading those QVDs (containing labels) into my frontend (right now, I am leaning towards scenario #4 = 40min), but overall, I would require 1 extra hour (which is precious for the maintenance team when the functional people are asking for a quick refresh after they have modified referentials).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for your suggestion, and for sharing your experience! Any help with understanding the Optimizer file, and the expected RAM consumption?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 15 Aug 2011 10:15:38 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243188#M92890</guid>
      <dc:creator />
      <dc:date>2011-08-15T10:15:38Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243189#M92891</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;So do your labels change?&amp;nbsp; Ours generally do not, so we can combine mapping of labels with incremental loads.&amp;nbsp; In that case, mapping during QVD creation saves additional time because we're only mapping a small portion of the records instead of every record and often multiple times as would occur if we did it during the load of the user applications.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If they change, but not routinely, you can also do a full reload only when they change.&amp;nbsp; Of course that requires that you be notified that they've changed in some way, and isn't a very robust procedure.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 15 Aug 2011 18:27:39 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243189#M92891</guid>
      <dc:creator>johnw</dc:creator>
      <dc:date>2011-08-15T18:27:39Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243190#M92892</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi John,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Yes, our labels change. So for the moment, I can't follow that recommendation; and with the 2nd option you have suggested, I agree with you that robustness would be difficult to achieve.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As I am still waiting for the benchmark of my frontend, I was trying to optimize my data loading process, and there is something I can't figure out.&lt;/P&gt;&lt;P&gt;I am working on my fact data and trying to prepare my Production environment, so I loop over the last 36 months to retrieve the data and build all my *.qvd files.&lt;/P&gt;&lt;P&gt;Once deployed in Production, the process will only take care of the last month. But I have noticed a difference in behavior depending on&lt;/P&gt;&lt;P&gt; 1/ whether the query is coded within my QV script&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; SQL Select *&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; FROM FactTable&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; WHERE (InvoiceDate &amp;gt;= '5/1/2011' and InvoiceDate &amp;lt; '6/1/2011')&lt;/P&gt;&lt;P&gt; 2/ or the query is a stored procedure which is called from the QV script&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; SQL EXEC SP @StartDay = '5/1/2011', @EndDay = '6/1/2011'&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The DB engine is MS SQL 2008, set up on a dedicated server. I have run both scenarios twice and made sure that there was no other activity on either server (QV and DB). 
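&lt;/P&gt;&lt;P&gt;For reference, the monthly loop of scenario 1 looks roughly like this (simplified, with made-up variable names):&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_text_macro jive_macro_code"&gt;&lt;P&gt;FOR i = 1 TO 36&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; LET vStart = Date(AddMonths(MonthStart(Today()), -$(i)), 'M/D/YYYY');&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; LET vEnd = Date(AddMonths(MonthStart(Today()), -$(i) + 1), 'M/D/YYYY');&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; LET vTag = Date(AddMonths(MonthStart(Today()), -$(i)), 'YYYYMM');&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Facts:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; SQL SELECT * FROM FactTable&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; WHERE InvoiceDate &amp;gt;= '$(vStart)' AND InvoiceDate &amp;lt; '$(vEnd)';&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; STORE Facts INTO Fact_$(vTag).qvd (qvd);&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; DROP TABLE Facts;&lt;/P&gt;&lt;P&gt;NEXT&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;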
Here are my results:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;My current implementation:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt; A/ Retrieve the fact data with a simple query as given above and store into qvd (1 per month)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;Number of records retrieved&lt;/SPAN&gt;: green continuous line / right axis (around 2millions records/month)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;Processing time&lt;/SPAN&gt;: blue dotted line / left axis&lt;/P&gt;&lt;P&gt; B/ Aggregate/Filter previous data and select fields as per funtional requirement and stored into separated qvd (1 per month)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; (DB connection is no longer needed as data are still within my QV process)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;Number of records aggregated&lt;/SPAN&gt;: purple contiuous line / right axis (around 1million record/month)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;Processing time&lt;/SPAN&gt;: red dotted line / left axis&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Result of scenario 1 (QV Script: SQL Select)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;IMG __jive_id="6737" alt="QVDataLoadingSQLSelect.JPG" class="jive-image-thumbnail jive-image" src="https://community.qlik.com/legacyfs/online/6737_QVDataLoadingSQLSelect.JPG" width="450" /&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Result of scenario 2 (QV Script: SQL Execute)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;IMG __jive_id="6736" alt="QVDataLoadingSQLExecute.JPG" class="jive-image-thumbnail jive-image" src="https://community.qlik.com/legacyfs/online/6736_QVDataLoadingSQLExecute.JPG" width="450" /&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As you can see:&lt;/P&gt;&lt;P&gt; - In scenario 1, the processing time for the query is not stable over the 36months, it varies from 
2"30min to 11"30min, to retrieve the same amount of data (avg: 5"48min)&lt;/P&gt;&lt;P&gt; - In scenario 2, the processing time for the same query (now coded in a Stored procedure) is now much more stable, but average time is also higher, as it varies from 9"15min to 14min (avg: 10"22min)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have not done anything special on the DB side, apart setting up some indexes on the InvoiceDate; Nothing on the StoredProcedure. Have you experience the same? I was expecting the SP to bring better performance. What are your advices on this point?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 24 Aug 2011 05:20:23 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243190#M92892</guid>
      <dc:creator />
      <dc:date>2011-08-24T05:20:23Z</dc:date>
    </item>
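The monthly extract-and-store loop described in the post above can be sketched in QlikView load-script form. This is a minimal sketch under stated assumptions: the loop bounds, variable names, QVD naming scheme, and the 'M/D/YYYY' date format are illustrative, not taken from the actual application; only the FactTable/InvoiceDate names come from the post.

```
// Illustrative 36-month extract loop (names and formats are hypothetical).
// Each iteration pulls one calendar month of facts and stores it as a QVD.
FOR i = 1 TO 36
    // Month boundaries as a half-open interval [vStart, vEnd)
    LET vStart = Date(AddMonths(MonthStart(Today()), -$(i)), 'M/D/YYYY');
    LET vEnd   = Date(AddMonths(MonthStart(Today()), -$(i) + 1), 'M/D/YYYY');
    LET vKey   = Date(AddMonths(MonthStart(Today()), -$(i)), 'YYYYMM');

    Facts:
    SQL SELECT *
    FROM FactTable
    // operands swapped to the same effect as "InvoiceDate before vEnd"
    WHERE InvoiceDate >= '$(vStart)' AND '$(vEnd)' > InvoiceDate;

    STORE Facts INTO [Facts_$(vKey).qvd] (qvd);
    DROP TABLE Facts;
NEXT i
```

Once in production, only the iteration for the most recent month would need to run, as the poster notes.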
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243191#M92893</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I'm mostly unfamiliar with MS SQL.&amp;nbsp; I can't think of any reason for the stored procedure to take meaningfully longer or be more stable than a straight cursor read assuming you're using the same SQL in both.&amp;nbsp; I'm also not sure why you would expect the stored procedure to have &lt;EM&gt;better &lt;/EM&gt;performance.&amp;nbsp; In DB2, I might be able to use tricks like multi-fetch to save round trips to the DBMS to improve performance in a stored procedure compared to SQL in QlikView, but we mostly don't use stored procedures, so I'm not certain.&amp;nbsp; I guess I'm saying I won't be much help at this point due to lack of relevant experience.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 24 Aug 2011 16:36:06 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243191#M92893</guid>
      <dc:creator>johnw</dc:creator>
      <dc:date>2011-08-24T16:36:06Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues: your advices</title>
      <link>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243192#M92894</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Nicolas. I think you should try the Direct Discovery feature, first introduced in QlikView 11.20. It lets you build the data model with the aggregated data that most users need, while detailed information (in your case, what the functional team wants for each separate invoice) is queried directly from the database according to the current selections in the QlikView application.&amp;nbsp; In my opinion, Direct Discovery should be used only when detailed data is needed rather rarely; otherwise it will put a high load on your RDBMS. I have attached some documents to this reply.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 11 Aug 2013 15:52:04 GMT</pubDate>
      <guid>https://community.qlik.com/t5/QlikView/Performance-issues-your-advices/m-p/243192#M92894</guid>
      <dc:creator />
      <dc:date>2013-08-11T15:52:04Z</dc:date>
    </item>
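The aggregated-model/detail-on-demand split suggested in the post above might look roughly like this in the load script. This is a sketch assuming QlikView 11.20 SR1+ Direct Discovery syntax; the connection, table, and field names are illustrative, not taken from the thread.

```
// Hypothetical Direct Discovery sketch (QlikView 11.20 SR1+ syntax).
// DIMENSION fields are loaded into memory and drive selections;
// MEASURE fields stay in the database and are aggregated there
// on demand, according to the user's current selections.
ODBC CONNECT TO [MyInvoiceDSN];

DIRECT QUERY
    DIMENSION
        InvoiceID,
        InvoiceDate,
        CustomerID
    MEASURE
        Amount,
        Quantity
    FROM FactTable;
```

This would keep InvoiceID available to the functional team without materializing all 45 million rows in the QVW, at the cost of live queries against SQL Server.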
  </channel>
</rss>

