Qlik Design Blog

All about product and Qlik solutions: scripting, data modeling, visual design, extensions, best practices, etc.


The QlikView internal logic enables a data model with several associated tables. It not only allows you to use several tables when building a data model – it encourages it.


This is very different from many other BI or query tools where, when several tables are used, they are all joined together into one table. The most obvious example of this difference is a simple SELECT statement. With it, you can use several tables as input and join them, but the output is always one single, denormalized table.




With QlikView, in contrast, you can have a multi-table relational data model that is evaluated in real-time. The associations are evaluated as joins at the moment when the user makes a selection in the application. At the same time, all objects, some with complex calculations based on these joins, are recalculated.




When creating the QlikView data model, you have a choice of loading the tables as several entities or joining some of them together. Joining in the script means that the result of the join is stored in the QlikView data model as one single table.


So what should you do? Is it better to keep the data model normalized (many tables) or is it better to de-normalize (fewer tables)?


My view is that it usually is better to keep the data model as normalized as possible. A normalized model has many advantages:


  • It is memory efficient.
    It is, by definition, the data model that uses the least memory.
  • It is CPU efficient.
    In most cases, QlikView calculations in a normalized model are as efficient as – or only marginally slower than – in a denormalized model. In some cases the normalized model is faster.
  • It is easier to understand and manage.
    It should be possible for other developers to read your script: a simple script with as few transformations as possible is a script that is easy for other developers to understand and maintain.
  • It minimizes the risk of incorrect calculations.
    Joins potentially change the number of records in the tables, which means that a normal Sum() or Count() function cannot always be used – they would sometimes return an incorrect result. You may counter that there is always a way to write a correct formula, but my point is that it should also be easy. Expressions in server objects will be written by users who do not have special knowledge about the data model in the app.
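The last point can be illustrated with a small Python sketch (the order data and field names here are hypothetical, not from the article): joining a header table into a line table duplicates the header fields once per line, so a plain sum over the joined table inflates the total.

```python
# Hypothetical data: joining Order Headers into Order Lines
# repeats the Freight value once per order line, so a naive
# sum over the joined (denormalized) table is wrong.

order_headers = [
    {"OrderID": 1, "Freight": 10.0},
    {"OrderID": 2, "Freight": 20.0},
]
order_lines = [
    {"OrderID": 1, "Amount": 100.0},
    {"OrderID": 1, "Amount": 150.0},
    {"OrderID": 2, "Amount": 200.0},
]

# Correct: sum Freight over the normalized header table.
correct_freight = sum(h["Freight"] for h in order_headers)

# Joined table: one row per order line, Freight copied onto each.
freight_by_order = {h["OrderID"]: h["Freight"] for h in order_headers}
joined = [
    {**line, "Freight": freight_by_order[line["OrderID"]]}
    for line in order_lines
]

# Naive sum over the joined table counts Freight once per line.
naive_freight = sum(row["Freight"] for row in joined)

print(correct_freight, naive_freight)  # 30.0 vs the inflated 40.0
```

In QlikView terms: once the join is made in the script, a simple Sum(Freight) in a chart silently counts each order's freight once per line, and the expression author must know the data model well enough to compensate.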


But it is not a clear-cut case.


Often there is a trade-off between memory efficiency and CPU efficiency. In other words, there are cases where you can decrease response time by letting the data model use more memory; where performance will be better if you make the join in the script.




One such case is if you have two very large fact tables, like Order Headers and Order Details. Another is if you have chart expressions containing fields from different tables. Then QlikView has to perform the join in memory, generating a virtual table over which the summation is made. This can be both memory and CPU demanding, so you might get better performance if you have already made the join in the script. But the difference is sometimes only marginal. You need to test to be sure.


The bottom line is that you'll have to weigh the pros and cons. Don't join unless you have to. If performance is important and you experience a noticeable improvement when you join, then you probably should join. But ask yourself what the implications are. Is the script still manageable? Can a user understand how the formula should be written?


The best join is often the one that is never made. Often – but not always.




See more about this topic in the Technical Brief about Joins and Lookups.


Very interesting article. I have been using this, but often the challenge is when you have to handle multiple 'facts' in your data model. So a related query is: is concatenation always recommended when handling multiple facts with different levels of detail? Sometimes building a link table is more intuitive, keeping the facts separated and visible!


Glad you like it. My view on multiple fact tables is that they should be concatenated. A link table will also work, but is not as efficient. The different levels of detail can be solved, and I am in fact working on an article on exactly this problem. So stay tuned – you'll hear more about this.


PS see more on Fact Table with Mixed Granularity


Dear Henric,

very interesting article.

I myself try to avoid joins, because I find them pretty confusing – and dangerous.

Indeed, reading your entry triggered a thought about KEEP and performance.

Let's say I have Table1, Order Headers, and Table2, Order Lines.

Which do you judge to be more effective performance-wise between these two options:

1. filtering Table1 with a WHERE clause to just the orders of year 2012, and then LEFT KEEP on Table2

2. filtering Table1 with a WHERE clause to just the orders of year 2012, and likewise filtering Table2 with a WHERE clause to just the orders of year 2012, given that the appropriate field is available.

In the second case, I see a double benefit: the data are filtered right at the source (so there is no need to check against Table1 when selecting from Table2), and there won't be any real-time evaluation when a selection is made in QlikView.

The only drawback I see is a little less flexibility: if I also wanted to include the orders of 2011, I would need to change the script for both tables rather than just for Table1 (and as I type, I realise that a variable set at the beginning of the script could determine the date, so all in all there isn't any drawback).

What are your thoughts on this? (sorry for deviating from the original topic)

thank you!


There are several ways to do this. All three approaches below will work, and which you should choose depends on how much memory you have, or on how fast your DB can perform a join.

1) Using Left Keep:

          SQL SELECT ... FROM OrderHeaders WHERE Year = 2012 ;

          Left Keep SQL SELECT ... FROM OrderLines ;

This is fast but could be very memory consuming: QlikView will load the entire OrderLines table before it reduces it to the proper size.

2) Using where Exists:

          SQL SELECT ... FROM OrderHeaders WHERE Year = 2012 ;

          Load * where Exists(OrderID); SQL SELECT ... FROM OrderLines ;

This is not as fast, but on the other hand it uses a minimal amount of memory: QlikView will load only the correct records from OrderLines.

3) Using a join:

          SQL SELECT ... FROM OrderHeaders WHERE Year = 2012 ;

          SQL SELECT ... FROM OrderLines

             INNER JOIN OrderHeaders ON OrderHeaders.OrderID=OrderLines.OrderID

             WHERE OrderHeaders.Year = 2012;

Whether this is fast or not depends on your DB. However, QlikView uses a minimal amount of memory, since it will load only the correct records from OrderLines.

Your thought about filtering at the source is totally correct. Less data will be transferred to QlikView, and this is probably good. But if the join is slow, the gain is lost...

I would probably try all three to see which is fastest.
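As a rough Python analogy of approach 2 (the data and field names here are hypothetical, not QlikView script), where Exists() amounts to filtering the second load against the keys that were already loaded:

```python
# Hypothetical analogy of "Load * where Exists(OrderID)": load the
# 2012 headers first, then keep only the order lines whose OrderID
# already exists among the loaded headers.

order_headers = [
    {"OrderID": 1, "Year": 2012},
    {"OrderID": 2, "Year": 2011},
    {"OrderID": 3, "Year": 2012},
]
order_lines = [
    {"OrderID": 1, "Amount": 100.0},
    {"OrderID": 2, "Amount": 150.0},
    {"OrderID": 3, "Amount": 200.0},
    {"OrderID": 3, "Amount": 50.0},
]

# First load: headers filtered at the source (WHERE Year = 2012).
loaded_headers = [h for h in order_headers if h["Year"] == 2012]
existing_ids = {h["OrderID"] for h in loaded_headers}

# Second load: keep only rows whose key exists in what was loaded,
# discarding each non-matching row as it streams in.
loaded_lines = [l for l in order_lines if l["OrderID"] in existing_ids]

print(len(loaded_lines))  # 3 – the 2011 order line is dropped
```

The memory behaviour matches the description above: only the matching rows are kept, at the cost of one key check per incoming row.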



Hi Henric.

As in this case I have a question.

If we keep everything in normalized form, without many joins,

then there is a possibility that it may increase the number of hops.

I have heard it is best practice to have a smaller number of hops in QlikView.

So what's your opinion here?

Will hops cause wrong calculations?


In most cases, QlikView performs very well even when you have many tables and thus many "hops" in the calculations. So I don't agree that best practice is to have as few hops as possible.

For very large data sets there are however a couple of cases where you should be cautious. If the performance is bad you should consider changing the data model:

  • If you have several very large tables, like in the picture above with four tables. In such a case you could consider joining the two big tables together, so that you get a star schema instead of a snowflake schema.
  • If you have calculations like Sum(x*y) where x and y are in different tables, you should consider moving one of the two fields so that both numbers are in the same table.
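The second bullet can be sketched in Python (hypothetical tables and field names): when the two numbers sit in different tables, every row of the aggregation needs a lookup across the hop, whereas moving one field into the other table in the script turns the aggregation into a single-table scan. Both give the same total.

```python
# Hypothetical sketch of Sum(Quantity * Price) when Quantity and
# Price live in different tables (normalized) versus when Price
# has been moved into the line table in the script (denormalized).

order_lines = [
    {"ProductID": "A", "Quantity": 2},
    {"ProductID": "B", "Quantity": 1},
    {"ProductID": "A", "Quantity": 3},
]
products = {"A": {"Price": 10.0}, "B": {"Price": 25.0}}

# Normalized: each row requires a lookup in the other table –
# the "hop" that the engine must resolve at calculation time.
total_normalized = sum(
    line["Quantity"] * products[line["ProductID"]]["Price"]
    for line in order_lines
)

# Denormalized: Price moved onto each line in the script, so the
# aggregation only scans one table.
lines_with_price = [
    {**line, "Price": products[line["ProductID"]]["Price"]}
    for line in order_lines
]
total_denormalized = sum(
    l["Quantity"] * l["Price"] for l in lines_with_price
)

print(total_normalized, total_denormalized)  # both 75.0
```

Note that moving a single field this way (in QlikView, typically via a mapping load or join of just that field) avoids the row-multiplication danger of a full join, since each line gains exactly one value.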

Will hops cause wrong calculations? No – it is the other way around. If you join the tables together, then you may get wrong calculations.