krumbein
Partner - Contributor III

Making sense of QIX engine log

We are not happy with the UI performance of a QlikView Server 12.3 SR4 installation.

We have thus activated the QIX engine log according to
https://help.qlik.com/en-US/qlikview/November2018/Subsystems/Server/Content/QV_Server/QlikView-Serve...

And I have read what the fields are supposed to mean, and I am roughly familiar with the basic anatomy of a calculation. But I am still having trouble making sense of all the data, especially the different times that are given and the different method calls. Can anyone point me towards a resource that would help? I have googled, of course, but couldn't come up with anything.

By far the biggest time is spent in a method called "Graph::GetChartLayout".


As that can't possibly be the layout work in the browser (we are talking about the Ajax client), I can only guess what it might be. The calculation of the min and max axis values? (We are talking about a linear gauge chart.)

Thanks!

13 Replies
marcus_sommer

Until now I haven't worked with this log, and therefore I couldn't say exactly which entry means what. If your assumption is right that "Graph::GetChartLayout" has nothing to do with the web rendering of the object, it might point to what is usually the heaviest calculation part of an object: the creation of the virtual table underneath the object, on which the real calculations are performed.

Before QIX was introduced, this process was completely single-threaded, and gathering various fields from multiple tables that are rather unfavourably associated (for example via link tables) could take some time. QIX brought optimizations, but I doubt that this process is now fully multi-threaded - and even if it is, it will probably remain one of the biggest calculation parts.

Do you have any comparison of this application in an earlier release? I mean: don't look only for failures in your installation/release without also taking a look at the data model used, the data (quality) and the expressions, which may be preventing a more responsive behaviour.

Besides this, you may get (more) valuable information about what happens in this application by looking at the mem files:

Obtaining-Memory-Statistics-for-a-QlikView-Document 

Recipe-for-a-Memory-Statistics-analysis 

- Marcus

krumbein
Partner - Contributor III
Author

Hello Marcus, and thank you for your response.

We do have experience with the application under 11.2 (and thus the old QlikView engine). The dashboard we are talking about never ran at blazing speed, but it is noticeably slower now - let's say to the tune of 15 seconds before vs. 50 seconds now.

Your mentioning the QIX engine made me search in a slightly different way, and I found this introduction to it:

https://developer.qlik.com/knowledge/tutorial/engineTutorial

Here is a quote from the chapter 'Meet the engine'
"A layout is the result of running a Generic Object properties definition through the QIX Engine."

My interpretation is that GetChartLayout() is thus "doing the calculations". Why they called it layout is beyond me, but I am still hopeful that internally there are good reasons for it 😉
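
Out of curiosity I sketched what that round trip looks like through the Engine API described in the tutorial. This is the Qlik Sense flavour of the API; the ids, handles and the measure below are made up, and QlikView's internal method names obviously differ - but it shows why "layout" really means "evaluated result":

create a session object from a properties definition (handle 1 = the open app):
{"jsonrpc":"2.0","id":1,"handle":1,"method":"CreateSessionObject","params":{"qProp":{"qInfo":{"qType":"gauge"},"qHyperCubeDef":{"qMeasures":[{"qDef":{"qDef":"Sum(Value)"}}],"qInitialDataFetch":[{"qTop":0,"qLeft":0,"qHeight":1,"qWidth":1}]}}}}

asking for the layout is what makes the engine actually evaluate the hypercube:
{"jsonrpc":"2.0","id":2,"handle":2,"method":"GetLayout","params":{}}
(the response contains qLayout with the computed qHyperCube values)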

The memory profile doesn't seem suspicious to me - all objects use very little memory. But I can't find any hints about what happens on the way to those results.

Sandro

marcus_sommer

It's quite a comprehensive tutorial, so it would need some time to go through it carefully - and I'm not sure it's detailed and explanatory enough to understand clearly what is happening and why ...

Nevertheless, I could imagine that the aspects mentioned there are the essential parts of the extension of QIX over the older engine. That means the need to create a virtual table, on which the calculations are performed, is still there, and not much changed in the main logic. But it got a somewhat different and extended structure, because there is now an API for it to exchange information with other tools. Quite probably this API has additional overhead in order to query/instruct the engine appropriately. Further, I think more features are now included in regard to sorting and/or highlighting the results or something similar - maybe a bit like a pre-rendering of the results.

And all of that might have led to the use of different terms like objects, layout and properties. In conclusion, I could imagine it having a similar effect to MS Excel's switch from BIFF to OOXML (from 2003 to 2007), which reduced performance significantly.

I assume my deductions about the engine behaviour are not of much use in your case, so here are some rather old-school hints to detect the real reasons and/or some more practical dependencies between your data/objects and the performance: remove sheets/objects, dimensions/expressions and data step by step, and monitor the performance directly and via the log file. I wouldn't be surprised if it's only one or two objects/expressions that slow down the performance, and they could probably be optimized in some way.

- Marcus  

krumbein
Partner - Contributor III
Author

You are right, your deductions don't help much, but your time and effort are nevertheless much appreciated 🙂

For anybody who is interested, I have found another version of the QIX Engine tutorial:
http://opensrc.axisgroup.com/tutorials/engine/108.%20Creating%20Charts%20with%20HyperCubes.html
It isn't as visually confined as the other one, and the animations actually work (they did not work for me in the other one).

I have done a lot of testing now and have come to a few conclusions:
1. The values provided by GetObjectCalcTime() can be very misleading if objects are calculated in parallel. And even if objects are shown, calculated and hidden again one at a time, I am not convinced of their reliability. hic wrote at some point that this function is from another era (before QIX?) and thus to be taken with a grain of salt.

2. A more reliable, albeit less granular, measurement of performance might be to simply take the time from the selection being made until the UI is idle again.

3. The time it takes to update the state vectors of the model can play a significant role in overall response time. I think we can determine that by clearing the cache, making a selection on an empty sheet and then waiting for the UI to be idle again. Even though no UI objects are calculated, it can take some time to settle.

4. I believe I am aware of most general performance guidelines (minimize the number of distinct values, count distinct isn't that bad, minimize hops and the possible need to build temporary tables for calculations, etc.) and of how the QIX engine roughly operates (first update state vectors, apply set analysis to those, maybe build a temporary table if necessary, etc.). But I have to say it is really difficult to predict how a certain change affects performance.
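
To make the first of those guidelines concrete, here is the kind of load script change I mean (a sketch only; the table and field names are made up):

// a raw timestamp can have millions of distinct values; split into a date part
// and a rounded time part, both fields compress and calculate far better
Facts:
LOAD
    Date(Floor(EventTimestamp))                as EventDate,  // ~365 distinct values per year
    Time(Round(Frac(EventTimestamp), 1/86400)) as EventTime,  // at most 86,400 distinct values
    Value
FROM Facts.qvd (qvd);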

Two things I have read about, but have not found to be true, are:

  • Myth 1: numbers as keys are more performant in the UI than text
    • doesn't seem to make a bit of difference
    • and that makes sense: the state vector calculations are probably done on the bit-stuffed pointers, not on the values
  • Myth 2: number comparisons in set analysis are more performant (a test sketch follows below this list)
    • e.g. WeNeedThisFlagNum = {1} is faster than WeNeedThisFlagText = {Y}
    • if that is true at all, it might be for fields with a lot of distinct values
    • the size of the fact table (with that flag) doesn't seem to matter
    • again, my assumption is that the comparison is done on the field values and the rest happens on the bit-stuffed pointers
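
For anyone who wants to check myth 2 against their own data model, this is roughly the test setup I used (starkly simplified; table, field and value names are made up):

// build both flag variants side by side in the load script
Facts:
LOAD *,
    If(Status = 'shipped', 1, 0)     as FlagNum,
    If(Status = 'shipped', 'Y', 'N') as FlagText
FROM Facts.qvd (qvd);

// then time the two chart expressions against each other:
// sum({< FlagNum = {1} >} Value)   vs.   sum({< FlagText = {'Y'} >} Value)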

Overall I haven't gotten far. Through further research and testing I have not found any major ideas for improvements in the data models or expressions. And even apps which usually have a response time of around a second sometimes take a looooong time to update (say 10 or 15 seconds). I am not sure how to investigate further.

I am also finding it irritating that it isn't visually clear when the objects are done calculating. In QV 11.2 they visually invalidated on selection and then came back once they were done. That doesn't seem to be the case in QV 12.3. Especially with long response times (and updated values coming online not all at once, but sequentially), that is pretty confusing.

Just a little rant... I have strayed quite far away from the subject under which the discussion started 😉

Sandro

marcus_sommer

I think you are right with most of your observations, especially the mentioned myths.

In regard to your mentioned flags, it might be worth a trial to use them as a multiplier instead of as a set analysis condition, i.e.:

sum(Value * Flag) instead of sum({< Flag = {1}>} Value)
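
As a small caveat (untested, just my understanding): the two are only equivalent if Flag is 1 on the relevant rows and 0 or null everywhere else, because null * Value is null and simply drops out of sum():

sum(Value * Flag)             // scans the rows directly, no extra state vector work
sum({< Flag = {1} >} Value)   // first computes a modified selection state, then sums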

I remember a posting discussing the pros and cons, but can't find it at the moment.

Another point which wasn't mentioned here but is commonly in use: variables. Are the expressions in (nested) variables, with or without parameters, and/or do they refer to other variables? Maybe something is different now than before, breaking multi-threaded processing so that it is executed single-threaded - delaying the calculation. If so, you could do some trials by replacing the variables with plain expressions and/or some fixed values.
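
For example (a made-up parameterized variable and its literal replacement):

// variable definition, e.g. in the variable overview:
// vSumFlagged = sum({< $1 = {1} >} $2)

// chart expression using the variable:
$(vSumFlagged(Flag, Value))

// the literal replacement to test against:
sum({< Flag = {1} >} Value)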

Furthermore, is the behaviour, respectively the calculation time, similar on the server (opened via the desktop client, Ajax and the IE plugin) and in the desktop client, or are there any differences?

- Marcus

krumbein
Partner - Contributor III
Author

Update on the UI feedback
I think I was slightly mistaken on that one. In the Desktop client it's missing; in the Ajax client it's there. Still confusing, though.

Flag as multiplier
I think I know which thread you mean. This one? 
I do understand the implied reasoning: if you don't use the flag in set analysis, there are no adjustments to be made to the state vector, and the calculations can start right away. And despite more values being multiplied and added, that can still be faster than updating the state vector first.

I would have sworn I had already tried it, but I did it again. In my starkly simplified test app it yields tremendous improvements, to the tune of 600 ms, down from a 3,300 ms response time. The field in question is only set for that one type of data in the concatenated fact table and is null for all the others.

sum(TypFlagThatIWant * DataPointThatIWantToSum)

is really the same as

sum(DataPointThatIWantToSum)

but is still that much faster than

sum({<TypFlagThatIWant = {1}>} DataPointThatIWantToSum)

I don't understand why...

I nevertheless tried it in the real-world app. There it doesn't perform any better. Not using the flag in the set analysis is still more performant than using it, though.

Regarding the variables
I make extensive use of variables for reusability, easy maintenance and, hopefully, better use of the cache. My understanding, though, is that what needs to be done with them is only text processing. I switched them out for an explicit formula, and it didn't change anything.

IE Client vs. Ajax vs. Desktop
The Ajax and IE plugin clients are about the same.
Localhost vs. another client is about the same.
Comparing those two against the Desktop client is not quite fair, as the Desktop client isn't installed on the server. It is about the same as well, though - just more consistent: there aren't those exceptions where it just takes forever.

My feeling is that there is a part which could be improved by optimization for the QIX engine, and that there is also something wrong with the server. It is running on IIS instead of QVWS. Maybe we'll try switching to QVWS (even though in theory it shouldn't matter).

marcus_sommer

Yes, I meant this posting. In the end it provides some ideas, examples and explanations, but there is no general faster/slower rule - it always depends on the data model used, and you really need to test the variants against each other (whereby the effort to test is rather small, at least with just a few objects).

In regard to the variables, it should always be just a text replacement, but I'm not so sure that this always holds, especially when referencing variables which are calculated outside of the object, like a max(Date) or sum(TotalOfValue), and/or various conditions to show/hide or trigger something ... which may in turn trigger further calculations ...

If possible, I would suggest installing the desktop client on the server to check the performance there. With it you could rule out that any server setting is missing or unsuitable - maybe timeouts, restrictions in object RAM, caching, multi/single-threading, (extended) logging (especially for the web server or for auditing) and similar - or whether there are any server objects (shared file) or any delays on the network side (firewall, proxy, ...). There are probably even more points which might go wrong with the server ...

- Marcus

Brett_Bleess
Former Employee

One key thing to keep in mind here is that in 11.20 the QIX engine was row-based, but in the 12 tracks that changed to column-based, so that may be the difference you have been searching for to explain the change in the timing of the calculations. I just wanted to point this out to see if it potentially helps explain things. The QVW file is still row-based, but when the file is loaded into memory, it is converted to column-based. That is why you see a decent jump in memory usage in the 12 track on opening: there are basically two copies of the app open initially - it is row-based when first opened, then a copy is made converting things to column-based, and then the row-based model is dropped completely, leaving just the column-based model in memory. Hopefully this helps a bit more.

Regards,
Brett

To help users find verified answers, please do not forget to use the "Accept as Solution" button on any post(s) that helped you resolve your problem or question.
I now work a compressed schedule, Tuesday, Wednesday and Thursday, so those will be the days I will reply to any follow-up posts.
krumbein
Partner - Contributor III
Partner - Contributor III
Author

My work on this has been on the back burner a bit, as there were functional requirements that needed implementation. I have tested things on the side, though, and one thing seems to have made a big difference - big enough for users to notice:

I have completely turned OFF the QIX Performance Log 😄

It had been on Level 5 all the time, since I had turned it on for debugging.

I guess it is off by default, right? So I am not sure why there was an issue in the first place, but the log seems not only to create a lot of data quickly, but also to slow the system down. And it is writing to a local SSD, of course.

I'll keep an eye on it and update you again when the final verdict is out.