Anonymous
Not applicable

Performance issue - High CPU for heavy document

Hi!

Recently I've been having a lot of performance issues on our QlikView Server.

The CPU goes up to 100% for calculations in our heavy documents (there are 2 documents with ~700m rows and a lot of distinct counts). So every time a user opens one of these documents, the CPU goes up to around 100% while calculating (which is to be expected) and then goes back down again. When several users are using these documents at the same time, the CPU stays flat at 100%.

The start page of the most used heavy document consists of a container with 7 charts in it, and some of the charts have distinct counts. There are no initial selections in the document. Is this really the best setup for performance? Is it better to use hidden objects?

The server has the following specs:

Windows Server 2008 R2 Enterprise

192 GB ram

Intel Xeon CPU X5650 @ 2 GHz (2 processors)

I understand that it's difficult to give any feedback based on so little information, but I'm just wondering if there's any smart way of rebuilding my app to make it easier on the server? Is there any log file that I could provide to give you more information?

Thanks in advance!

8 Replies
marcus_sommer

I think first you should check whether all of the data is really needed - the Document Analyzer from Rob Wunderlich is very helpful here: Tools | Qlikview Cookbook. Also, removing or minimizing high-cardinality fields (The Importance Of Being Distinct) and using autonumber() for keys could make a big impact - see how it works: Symbol Tables and Bit-Stuffed Pointers.
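
To make the autonumber() idea concrete, here is a minimal sketch (the table and field names are invented for illustration). Note that autonumber() only produces consistent values within a single reload, so both tables have to be loaded in the same script:

// replace a long composite text key with a compact integer key
Orders:
LOAD autonumber(OrderID & '|' & LineNo) as %OrderLineKey,
     OrderDate,
     Amount
FROM Orders.qvd (qvd);

OrderLines:
LOAD autonumber(OrderID & '|' & LineNo) as %OrderLineKey,
     Product,
     Quantity
FROM OrderLines.qvd (qvd);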

Further, minimizing the complexity of calculations with precalculated flags within the script (Customized flags) and avoiding aggr-dimensions and if-loops (replacing the needed checks with set analysis) will help a lot - and I would set opening selections, maybe to the current year (and month).
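
As a hedged sketch of the flag approach (field names like Year and Amount are assumptions), the flag is created once in the script and then replaces the if() in the chart expression via set analysis:

// script: 1/0 flag for the current year
Facts:
LOAD *,
     if(Year = year(today()), 1, 0) as CurrentYearFlag
FROM Facts.qvd (qvd);

// chart expression: instead of sum(if(Year = year(today()), Amount))
sum({<CurrentYearFlag = {1}>} Amount)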

- Marcus

kuczynska
Creator III

I would say you have to think about slightly changing some of your logic in the script - how many tables does your document have? How many of them are loaded straight from the DB and how many are loaded from QVDs using optimized loads? Would you consider separating your script from the front end by using a binary load?
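
If you do split it, a rough sketch of the two-layer approach (file names are hypothetical) would be a generator document that only builds the data model, and a front-end document whose very first script statement is Binary. Also keep in mind that, as far as I know, a QVD load only stays optimized as long as you do no transformations (only field renames and a single where exists() clause keep it optimized):

// front-end document script - Binary must be the first statement
Binary DataModel.qvw;          // takes over the whole data model from the generator app

// optimized QVD load in the generator document
Facts:
LOAD * FROM Facts.qvd (qvd);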

Next questions - what are your keys between the tables? Try to keep them in numeric format, not text; it should help straight away. Some changes to your data model may be beneficial too - are you able to join some of your tables or apply some additional mappings to decrease the number of tables in your model? Is your model a star schema or a snowflake?
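
For the mapping idea, a small hedged example (names invented): a small lookup table can be folded into the fact table with ApplyMap() instead of being kept as a separate table or joined:

ProductMap:
Mapping LOAD ProductID, ProductName
FROM Products.qvd (qvd);

Facts:
LOAD *,
     ApplyMap('ProductMap', ProductID, 'Unknown') as ProductName
FROM Facts.qvd (qvd);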

Another option - drop all unused fields and data. If you are loading historical records that your users don't want to see, don't bring them in.

My last question - are you using calculated dimensions in your charts? If so, are you able to move them to the script?
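
For example (a hedged sketch, names assumed): instead of a calculated dimension like =if(Amount > 1000, 'Large', 'Small') in the chart, the same bucket can be created once in the script and then used as a plain field dimension:

Facts:
LOAD *,
     if(Amount > 1000, 'Large', 'Small') as OrderSizeBucket
FROM Facts.qvd (qvd);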

It's very hard to determine what could possibly be optimized in your code without seeing the qvw...

Anonymous
Not applicable
Author

The data is actually pre-aggregated, so I only load a data table and a calendar - there's really no logic in the script.

I've tried dropping fields that are not used. I've also looked at Sheet Properties > Objects to see the calculation time of my objects, but all the calculation time is attributed to one object, which is only a text box.

It takes me around 5 minutes to open the document in the desktop client - is that normal? The document is around 15 GB and takes up about 50 GB of RAM when it's opened.

marcus_sommer

The difference in size between the app and RAM means that you are using compression within the app (General tab of the document properties). Such compression needs a lot of calculation time - you should disable it.

- Marcus

Anonymous
Not applicable
Author

So that will basically make my app a lot bigger? Are there any downsides to that?

marcus_sommer

Yes, you will need more storage space, but that is the only disadvantage - try it. Not only will the app open faster, it will also save time when storing the app.

- Marcus

Anonymous
Not applicable
Author

Sorry for late response.

Will the compression setting affect the use of the document in the accesspoint?

A couple of more questions:

  • Right now I have a cyclic group dimension on the dashboard where date is preselected - would it be better for performance to make yearmonth the default when the document is opened?
  • I don't have any selections made when the document is opened - is that bad? Should I change it so that the current year is selected, given that most of the users are interested in the current year?
  • Is it more efficient to use hidden objects instead of a container?
marcus_sommer

Compression has no impact on the use of applications in the AccessPoint - it only influences the time needed to load the application into RAM.

The fewer the rows that need to be displayed in an object, the faster the calculation will be. And yes, I would put yearmonth at the top of the cyclic group and preselect the current year (and sometimes the current month, too), either by updating or on opening the document - I do this with most of my applications.
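
One common way to preselect the current year on open (a hedged sketch - the field name Year is an assumption) is a document trigger: Settings > Document Properties > Triggers > OnOpen > Add Action > Selection > Select in Field, for example:

Field:         Year
Search String: $(=year(today()))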

From a performance point of view it's not really important whether an object is hidden in a container, hidden via a show condition driven by a variable, minimized, or placed on another sheet - objects that aren't displayed won't be calculated (with some exceptions, such as calculations within the title) - so this is more a layout or usability question.
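
As a small hedged example of the variable-driven variant: on the Layout tab of an object you can set Show to Conditional with a condition such as

vShowDetails = 1

where vShowDetails is a variable toggled by a button, so the detailed chart is only calculated when the user asks for it.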

Another way to save calculation time is to set calculation conditions on objects (General tab) to force users to make certain selections, maybe like this:

if((match(getcurrentfield(YourPeriodCyclicGroup), 'date', 'week') and count(distinct yearmonth) <= 6) or
   (match(getcurrentfield(YourPeriodCyclicGroup), 'month', 'yearmonth') and count(distinct yearmonth) <= 18) or
   (match(getcurrentfield(YourPeriodCyclicGroup), 'year') and count(distinct yearmonth) <= 36), true(), false())

- Marcus