13 Replies. Latest reply: Jan 2, 2014 11:55 AM by Terry Kavouras

    How do we know a qvw is a good qvw?



      I'm wondering how we can benchmark the performance of a QVW.


      I have 4 million rows in 2 flat tables, with sequential numbers as keys linking the 2 tables:

      one table for facts, one for dimensions.


      When loading a chart that uses set analysis, I still see the hourglass. I want to remove the hourglass entirely.


      Does this mean moving to a more powerful machine would help?


      Could someone share their experience?



        • Re: How do we know a qvw is a good qvw?
          Gysbert Wassenaar



          Just to make sure: you do see that the two tables are linked in the table viewer? If you see two separate tables without a line connecting them, then the tables are not associated. In that case, make sure the key fields that should link the tables have the same field name.

          • Re: How do we know a qvw is a good qvw?
            Marcus Sommer

            Before thinking about a more powerful machine, you should optimize your load script: calculate most of your expressions there, or at least pre-calculate some expression parts, and restructure your data model. It can help to join and/or concatenate your tables into one big fact table - calculations within a single table are usually faster than calculations across several tables. Sometimes you must decide between optimizing your script speed and your GUI speed ...
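            A minimal sketch of what that looks like in the load script (all table, file, and field names here are invented for illustration):

            ```
            // Join the dimension table into the fact table so chart
            // calculations run over a single table
            Fact:
            LOAD OrderID, DimKey, Amount, Cost
            FROM Fact.qvd (qvd);

            LEFT JOIN (Fact)
            LOAD DimKey, Region, Product
            FROM Dim.qvd (qvd);

            // Pre-calculate an expression part in the script so the
            // chart only needs a plain Sum(LineProfit)
            Sales:
            LOAD *, Amount - Cost AS LineProfit
            RESIDENT Fact;
            DROP TABLE Fact;
            ```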


            - Marcus

            • Re: How do we know a qvw is a good qvw?
              Rob Wunderlich

              4 million rows is a relatively small QV app, and your data model sounds reasonable. I suspect your expression(s) can be optimized. Can you post your expressions?



              • Re: How do we know a qvw is a good qvw?
                Steve Dark



                A more powerful machine will always help things.


                Optimizing your expressions is often the best way to boost performance. What is the calculation in the chart that takes a long time?


                You will get better performance at the front end if you have just a single table.  You could look at joining your Dimensions table to your Fact table to achieve this.


                There are lots of examples online, in blogs and articles, of ways you can optimize your documents. My own blog has a number of such articles.






                  • Re: How do we know a qvw is a good qvw?

                    Thanks Gentlemen for your comments.


                    The two tables are linked correctly.


                    40 columns for dimensions.

                    180 columns for facts.


                    Frequently used fields, like months and years, are already joined into the fact table.



                    We do a lot of rolling-12-months analysis, and we use set analysis.

                    Is there another way to get rolling 12 months besides the expression below?










                           MONTH_SEQ_PIV={'$(=MONTH_SEQ_CURRENT-13)'} >}LATE_ORDER))/
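                    For comparison, a common way to write a rolling 12 months with set analysis over a sequential month key - reusing the MONTH_SEQ_PIV and MONTH_SEQ_CURRENT names from the fragment above, with Sales as a placeholder measure - is a single range search in the set modifier:

                    ```
                    Sum({<MONTH_SEQ_PIV = {">=$(=MONTH_SEQ_CURRENT-11)<=$(=MONTH_SEQ_CURRENT)"}>} Sales)
                    ```

                    An alternative that avoids set analysis in the chart entirely is an AsOf (bridge) table built in the script, linking each month to the 12 months it covers.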






                    The whole QVW takes 18 seconds to open with the star schema.

                    After turning it into 1 fact and 1 dimension table, it still takes 18 seconds. Is this long for 4 million rows, or could it be better?


                    The hourglass is much shorter when loading charts (around 1 second),

                    but I am trying to eliminate it entirely; I'm not sure if this is realistic?

                    Do you often see the hourglass in your QVWs?




                    Some questions...


                    1. Another thing I notice: improving the data schema doesn't necessarily improve the opening time of the QVW in QV Desktop. I notice it's almost the same time before and after. I used to have a star schema; after merging everything into just 1 fact and 1 dimension table, the front end works better when users interact with it, but not significantly.

                    2. Does Section Access slow down the opening of the QVW? I removed the Section Access and saved almost 7 seconds. If it does, is there any way I could improve it?

                    3. Some general questions I'd like to ask:
                      • What is the biggest QVW you have ever handled?
                      • How long does it take to open in QV Desktop?
                      • What kind of data schema are you using (in general)?
                      • What is the server model and RAM size used?
                      • How do I benchmark opening time vs. size?
                      • Do you often see the hourglass in your QVWs? How do you make sure the time is acceptable?




                      • Re: How do we know a qvw is a good qvw?
                        Rob Wunderlich

                        Gysbert published an excellent paper on rolling-period analysis. I can't find it, but Gysbert is on this thread, so perhaps he can post a link.


                        Let me address just one of your questions, opening time. Opening time is made up of four components.

                        1. I/O -- time to read from disk.

                        2. De-compression. CPU time to decompress if the QVW was stored compressed (default).

                        3. Section Access dynamic data reduction.

                        4. Initial chart calculation.


                        As I said before, 4M is not a very big QV app, but you do have a significant number of columns. My first step would be to determine if you need all those columns. Use Document Analyzer (Downloads - Rob Wunderlich Qlikview Consulting) to identify unused fields and drop them if possible.
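                        As an illustration (the field names here are invented), unused fields can be dropped at the end of the script once the analyzer has flagged them:

                        ```
                        // Drop fields Document Analyzer reported as unused
                        DROP FIELDS CommentText, LegacyCode, InternalFlag;
                        ```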


                        Some QVWs take a long time to de-compress. You may want to try it with compression=none (Document Properties, General pane) to see if that speeds things up.


                        In the server environment, you can neutralize the effects of steps 1 & 2 by using the document Pre-Load option.


                        Section Access reduction can take time. In my experience, if a large number of values is being selected, the time can be reduced by selecting integer values (e.g., autonumber) rather than string values.
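                        A rough sketch of that idea, with invented names: build an integer surrogate with AutoNumber() and reduce on it rather than on a string field. Note that AutoNumber() values depend on load order, so the Section Access table must be fed matching values.

                        ```
                        // Data model: integer surrogate for the reduction field
                        Fact:
                        LOAD *, AutoNumber(Region) AS REDUCTION_KEY
                        FROM Fact.qvd (qvd);

                        // Section Access: reduce on the integer key
                        SECTION Access;
                        LOAD ACCESS, NTNAME, REDUCTION_KEY
                        FROM AccessList.qvd (qvd);
                        SECTION Application;
                        ```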


                        I would work on those items first and then use the chart calc time data in Sheet Properties to determine how much time is attributable to chart calc.



                        • Re: How do we know a qvw is a good qvw?

                          Qlik Ranger wrote:


                          Thanks Gentlemen for your comments.


                          The two tables are linked correctly.


                          40 columns for dimensions.

                          180 columns for facts.


                          Frequently used fields like months, years, are already joined into the fact tables.

                          Ranger, do you have 40 columns joining the two tables? Or is there just one common column in both tables, with the other 39 only in the dimension table? This is important for QlikView performance. In other words, do you have any synthetic keys in your QlikView schema?

                            • Re: How do we know a qvw is a good qvw?

                              Thanks Rob for your advice.


                              Klicki, you can picture the schema as 2 boxes and one line.

                              No synthetic keys, no circular references.

                              I also saw your question in another post about why I don't join them into a single table.

                              It's because I have a lot of columns, and a join might take up a lot of memory.

                              That's why I'm keeping one fact and one dimension table.


                              But I'm really interested to know, in general, how long does QV Desktop take to load a 2 TB QVW, or one with 4 million rows? I need something as a benchmark. I understand it varies with the data model, the expressions, etc., but I just want a general idea, as I don't have many big QVWs.



                                • Re: How do we know a qvw is a good qvw?
                                  Rob Wunderlich

                                  In general, my 4M row apps would never show an hourglass.


                                  You must make sure that you have enough RAM on the desktop machine so that it doesn't use virtual memory. Look at the Working Set size of the qv.exe task in Windows Task Manager to see how much memory your document requires.



                                    • Re: How do we know a qvw is a good qvw?

                                      Hi Rob,


                                      When optimising the QVW, I work directly on the production server, so that I know the actual performance.

                                      I have 64 GB RAM and 10 cores.



                                      • Re: How do we know a qvw is a good qvw?
                                        Adam Barrie-Smith

                                        I agree. The QVW doesn't sound like it should be slow. "How good is good enough" is a tough question. I think the comments made so far give a good strategy.


                                        To this I would add using Memory Statistics (Document Properties, General tab) to understand what is happening in your virtual database. When you have a fact table with 100+ columns, you can easily run into problems. In recent QVWs I have seen that any single field larger than 200 MB will negatively impact performance. If the big field is a key, I autonumber() it; if it is data, I look at breaking it into 2 fields.
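                                        A sketch of both fixes, with invented names - AutoNumber() a big key, and split a big data field in two:

                                        ```
                                        Fact:
                                        LOAD
                                            // Big composite key: replace with compact sequential integers
                                            AutoNumber(OrderID & '|' & LineNo) AS OrderLineKey,
                                            // Long text field: split it so each part is smaller
                                            Left(CommentText, 100) AS CommentPart1,
                                            Mid(CommentText, 101)  AS CommentPart2,
                                            Amount
                                        FROM Fact.qvd (qvd);
                                        ```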


                                        If you analyse and discover that you really do need all 180 columns, then you should try to identify the columns most used in a user's initial analysis. Keep those columns in the main table, but move the rest out into a secondary table.


                                        I have only ever done this on QVWs larger than 4 GB (saved uncompressed). Good luck, Adam

                              • Re: How do we know a qvw is a good qvw?

                                As has been said before, 4 million rows is not a lot, but in truth it's not the number of rows that matters, it's the amount of data. The content of the fields matters. Do you have any particularly long fields? How unique is your data? If you have long comment fields, for example, they will not be compressed in RAM. Here are some of the things to look at:


                                1. Size of the data model in RAM. Be sure there is ample RAM available for the data model, sessions, and cache. It doesn't sound to me like the app is paging, but make sure. Remove any fields that aren't needed. If you can, truncate or remove long text fields (i.e., comment-type fields). They don't compress and are slow to handle in QlikView.
                                2. Working Set limits. Much of the efficiency of QlikView comes from having many cached objects (calculated objects). When the RAM is (relatively) full, that's a good thing, because it means there will be a higher cache hit rate for all users. This means fewer objects will have to be freshly calculated, so QlikView will run more quickly. However, when the applications, sessions, and cache in RAM hit the Working Set Low limit (default setting is 70% of RAM), QlikView will start removing sessions, objects, and cache from RAM. If this happens, QlikView will have to recalculate objects more often - there will be a lower cache hit rate - and QlikView will be slower. If you have 64 GB of RAM, I would experiment with setting the Working Set High to about 92-93% and the Working Set Low to about 80-84%. That should make the maximum amount of RAM available to QlikView. If QlikView is still hitting the Working Set Low limit, then you need to add RAM.
                                3. Objects and expressions. Check the calculation time for each object on each page. Know which of your objects/expressions take the longest and explore how you can fix that, e.g., calculate values in the script, use more efficient expressions, make sure you're not using single-threaded operations in your expressions, etc.
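                                On item 3, one common pattern - sketched here with invented field and variable names - is to compute a flag in the script so the chart can use a simple multiplication instead of an If() or a heavy set expression:

                                ```
                                // Script: flag each row that falls within the rolling 12 months
                                Fact:
                                LOAD *,
                                     If(MONTH_SEQ >= $(vMonthSeqCurrent) - 11, 1, 0) AS InLast12
                                FROM Fact.qvd (qvd);
                                ```

                                The chart expression then becomes a plain Sum(Sales * InLast12).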


                                Hope this helps a bit.