Not applicable

How do we know a qvw is a good qvw?

Hi,

I'm wondering: how do we benchmark the performance of a qvw?

I have 4 million rows in 2 flat tables, with sequential numbers as keys to link the 2 tables.

One table for facts, one for dimensions.
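
Roughly, the model looks like this (simplified, with made-up field names):

Fact:
LOAD RowID, SalesAmount, Quantity   // about 4 million rows
FROM Fact.qvd (qvd);

Dim:
LOAD RowID, Customer, Product, Region   // RowID is the sequential key linking the two tables
FROM Dim.qvd (qvd);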

When loading a chart with set analysis, I still see the hourglass. I want to remove the hourglass completely.

Does this mean moving to a more powerful machine would help?

Could someone share their experience?

Thanks.

13 Replies
rwunderlich
Partner Ambassador/MVP

In general, my 4M row apps would never show an hourglass.

You must make sure that you have enough RAM on the desktop machine so that it doesn't have to use virtual memory. Look at the Working Set size of the qv.exe task in Windows Task Manager to see how much memory is required for your doc.

-Rob

Not applicable
Author

Hi Rob,

When optimising the qvw, I'm working directly on the production server, so that I know the actual performance.

I have 64 GB RAM and 10 cores.

Thanks,

mr_barriesmith
Partner - Creator

I agree.  The QVW doesn't sound like it should be slow.  What counts as "good" is a tough question, but I think the comments made so far give a good strategy.

To this I would add using the Memory Statistics file (Document Properties > General tab) to understand what is happening in your virtual database.  When you have a fact table with 100+ columns you can easily run into problems.  In recent QVWs I have found that any single field larger than about 200 MB will negatively impact performance.  If the big field is a key I AutoNumber() it, and if it is data I look at breaking it into two fields.
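
A rough sketch of both ideas in the load script (field and file names are invented, just to show the pattern):

Fact:
LOAD
    AutoNumber(OrderGUID) as %OrderKey,   // wide text key replaced by a compact sequential integer
    Floor(OrderTimestamp) as OrderDate,   // big timestamp broken into a date part ...
    Frac(OrderTimestamp)  as OrderTime,   // ... and a time part, each with far fewer distinct values
    Amount
FROM Fact.qvd (qvd);

The split helps because QlikView stores each distinct value of a field only once, so two low-cardinality fields take much less symbol space than one very wide one.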

When you analyse and discover that you really do need all 180 columns, then try to identify the most-used columns for a user's initial analysis.  Keep those columns in the main table and drop the rest out into a secondary table.
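
Something along these lines (again, invented names, just to show the shape):

FactMain:
LOAD %RowKey, OrderDate, Amount, Quantity   // the handful of columns most analysis starts from
FROM Fact.qvd (qvd);

FactDetail:
LOAD %RowKey, InternalRef, LegacyCode, CommentText   // rarely-used columns, still linked via the same key
FROM Fact.qvd (qvd);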

I have only ever done this on QVWs larger than 4 GB - saved uncompressed.  Good luck, Adam

Not applicable
Author

As has been said before, 4 million rows is not a lot, but in truth, it's not the number of rows, but the amount of data. The content of the fields matters.  Do you have any particularly long fields?  How unique is your data?  If you have long comment fields, for example, they will not be compressed in RAM. Here are some of the things to look at:

  1. Size of the data model in RAM.  Be sure there is ample RAM available for the data model, sessions, and cache.  It doesn't sound to me like the app is paging, but make sure.  Remove any fields that aren't needed.  If you can, truncate or remove long text fields (e.g., comment-type fields); they compress poorly and are slow to handle in QlikView (see the script sketch after this list).
  2. Working Set Limits.  Much of the efficiency of QlikView comes from having many cached objects (calculated objects).  When the RAM is (relatively) full, that's a good thing, because it means there will be a higher cache hit rate for all users.  Fewer objects will have to be freshly calculated, so QlikView will run more quickly.  However, when the applications, sessions and cache in RAM hit the Working Set Low limit (default setting is 70% of RAM), QlikView will start removing sessions, objects and cache from RAM.  If this happens, QlikView will have to recalculate objects more often - there will be a lower cache hit rate - and QlikView will be slower.  With 64 GB of RAM I would experiment with setting the Working Set High to about 92-93% and the Working Set Low to about 80-84%.  That makes the maximum amount of RAM available to QlikView.  If QlikView is still hitting the Working Set Low limit, then you need to add RAM.
  3. Objects and Expressions.  Check the calculation time for each object on each page.  Know which of your objects/expressions take the longest and explore how you can fix that, e.g., calculate values in the script (as in the sketch below), use more efficient expressions, make sure you're not using single-threaded operations in your expressions, etc.
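
A couple of script-side sketches for points 1 and 3 (all field names here are placeholders, not from your app):

Fact:
LOAD
    *,
    If(Amount > 1000, 1, 0) as IsLargeOrder   // flag calculated once in the script; a chart can then use
FROM Fact.qvd (qvd);                          // Sum({<IsLargeOrder={1}>} Amount) instead of a row-by-row If()

DROP FIELD CommentText;   // long free-text fields compress poorly - drop or truncate them if nobody analyses them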

Hope this helps a bit.