hic
Former Employee

 

In the QlikCommunity forum I have often seen people claim that you should minimize the number of hops in your Qlik data model in order to get the best performance.

I claim that this recommendation is not (always) correct.

In most cases, you do not need to minimize the number of hops, since it affects performance only marginally. This post will try to explain when an additional table will significantly affect performance and when it will not.

The problem is which data model to choose:

[Image: Data Models Hops.png]

 

The question is: Should you normalize and have many tables, with several hops between the dimension table and the fact table? Or should you join the tables to remove hops?
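For concreteness, the snowflake alternative could be loaded with a minimal script like the sketch below. The table names, field names and QVD sources are just hypothetical examples, not the data used in the tests; Qlik links the tables automatically through the identically named key fields.

    // Snowflake: three tables, two hops between ProductGroups and the fact table.
    // The tables associate on the identically named key fields.
    Facts:
    LOAD TransactionID,
         ProductID,
         Amount
    FROM Facts.qvd (qvd);

    Products:
    LOAD ProductID,
         ProductGroupID,
         ProductName
    FROM Products.qvd (qvd);

    ProductGroups:
    LOAD ProductGroupID,
         ProductGroupName
    FROM ProductGroups.qvd (qvd);

The joined alternative would instead use the Join prefix to merge one or both dimension tables into the fact table; a sketch of that follows further down.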

So, I ran a test where I measured the calculation time of a pivot table calculating a simple sum over a large fact table, using a low-cardinality dimension, while varying the number of hops between the two. The graph below shows the result. I ran two series of tests: one where the cardinality of the dimension tables changed by a factor of 10 for each table, and one where it changed by a factor of 2.

[Image: CalcTime vs NoOfHops.png]

 

You can clearly see that the performance is not affected at all by the number of hops – at least not between 0 and 3 hops.

At 4 hops, however, the calculation time in the 10x series starts to increase slightly, and at 5 hops it has increased considerably. But this is not due to the number of hops. Instead, it is the result of the primary dimension table (the dimension table closest to the fact table) becoming large: at 5 hops it has 100,000 records and can no longer be regarded as a small table.

To show this, I ran a second test: I measured the calculation time of the same pivot table using a fixed three-table data model, varying the number of records in the intermediate table while keeping the sizes of the other tables constant.

[Image: Data Model Intermediate table.png]

 

In real life, this structure would correspond to a part of a more complex data model, e.g.

  • Facts  -  Products  -  Product Groups
  • Order Lines  -  Order Headers  -  Customers

The result of my measurement can be seen in the red bars below:

 

[Image: CalcTime vs NoOfRecords Join.png]

 

The graph confirms that the size of the intermediate table is the sensitive point: if it has 10,000 records or fewer, its existence hardly affects performance. But if it is larger, you get a performance hit.

I also measured the calculation times after joining the intermediate table, first to the left with the fact table and then to the right with the dimension table, to see whether the calculation times decreased (blue and green bars). You can see that joining tables with 10,000 records or fewer does not change the performance. But if you have larger tables, a join with the fact table may be a good idea.
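In script terms, the two join variants could look something like the sketch below, again with made-up table and field names. A Left Join keeps all records of the target table and adds the fields of the joined table.

    // Join "to the left": merge the intermediate table into the fact table,
    // so that ProductGroupID (the key to the remaining dimension table)
    // ends up in the fact table.
    Facts:
    LOAD TransactionID,
         ProductID,
         Amount
    FROM Facts.qvd (qvd);

    Left Join (Facts)
    LOAD ProductID,
         ProductGroupID
    FROM Products.qvd (qvd);

    // Join "to the right": merge the intermediate table into the dimension
    // table instead, leaving the fact table untouched:
    //
    //   ProductGroups:
    //   LOAD ProductGroupID, ProductGroupName FROM ProductGroups.qvd (qvd);
    //
    //   Left Join (ProductGroups)
    //   LOAD ProductID, ProductGroupID FROM Products.qvd (qvd);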

Conclusions:

  • The number of hops does not always cause significant performance problems in the chart calculation. But a large intermediate table will.
  • If you have both a primary and a secondary dimension (e.g. Products and Product Groups), you should probably not join them. Leave the data model as a snowflake.
  • If you have the facts in two large tables (e.g. Order Lines and Order Headers), you should probably join them into one common transaction table.

HIC

 

PS. A couple of disclaimers:

  1. The above study only concerns the chart calculation time - which is usually the main part of the response time.
  2. If the expression inside your aggregation function contains fields from different tables, none of the above is true.
  3. Your data is different from mine. You may get slightly different results.

 

Further reading related to this topic:

To Join or not to Join

37 Comments
Marcio_Campestrini
Specialist

Hi Henric

Great post. Thanks for sharing your thoughts.

Márcio

datanibbler
Champion

Ditto! Great post!

I would only like to add that in a big company, where you are probably not always going to be the (only) one looking after QlikView, keeping the tables in your data model recognizable, so that one can see what comes from where and what goes into which chart, may be equally important. You may have to forgo some performance optimization in order to keep things understandable and manageable.

Best regards,

DataNibbler

kuba_michalik
Partner - Specialist

Very interesting, thanks!

Regarding disclaimer 2, my intuition is that having fields from different tables inside one aggregation function is (as a rule of thumb) a bad idea and a performance killer (as QlikView would have to conjure up a potentially huge "virtual join" before aggregating). Does that sound about right?

ashfaq_haseeb
Champion III

Nice Post Henric

hic
Former Employee

Yes, it can be a performance killer for exactly the reason you mention.

But - I would still not rule it out altogether. The performance is often better than you'd expect, and sometimes it's a bad idea to move more fields to the fact table (which is the only alternative). So, my advice is to try it before you start optimizing.
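As an illustration (with made-up field and table names): if Quantity sits in the fact table and UnitPrice in a Products table, then Sum(Quantity * UnitPrice) spans two tables. The sketch below shows the alternative of moving the needed field into the fact table first, so that the aggregation becomes a single-table expression.

    // Move UnitPrice into the fact table, so that
    // Sum(Quantity * UnitPrice) no longer spans two tables.
    OrderLines:
    LOAD OrderID,
         ProductID,
         Quantity
    FROM OrderLines.qvd (qvd);

    Left Join (OrderLines)
    LOAD ProductID,
         UnitPrice
    FROM Products.qvd (qvd);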

HIC

Not applicable

Hi Henric,

Great Post.

jjordaan
Partner - Specialist

Hi Henric,

Thanks for a nice and clear explanation

mikkeltaylor
Creator II

Hi Henric,

Very informative post as usual. Where there is no impact on the metrics, we have seen a good performance boost from merging a large dimension table into a large fact table.

How would this work when you are summing a counter field in the fact table, though? For example, we have a large "Accounts" table joined to a large "Customers" table (a customer can have one or more accounts). Most charts show a count of customers, which is at the detail level of the dimension table.

Regards,

Michael

hic
Former Employee

A good performance boost when merging a large dimension table into a large fact table is expected. Keep doing that.

On the accounts table: I assume that there is also a third table, containing transactions (one account can have several transactions). If so, I would consider merging the account data into the transaction table and using Count(distinct AccountID) as the measure. But again - it depends on how large the accounts table is.
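As a rough sketch of what that could look like (the table and field names are assumptions, not taken from your app):

    // Merge the account attributes into the transaction table ...
    Transactions:
    LOAD TransactionID,
         AccountID,
         Amount
    FROM Transactions.qvd (qvd);

    Left Join (Transactions)
    LOAD AccountID,
         CustomerID
    FROM Accounts.qvd (qvd);

    // ... keep Customers as the dimension table ...
    Customers:
    LOAD CustomerID,
         CustomerName
    FROM Customers.qvd (qvd);

    // ... and use a distinct count as the chart measure, e.g.
    //   Count(distinct AccountID)     for accounts
    //   Count(distinct CustomerID)    for customers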

HIC

Not applicable

Great post Hic - thanks again
