I have a question I have thought about for some time: how does an uneven table affect QV? Let's say I have an order table and an invoice table. Some fields are shared, and some are unique to each table.
Let's say I want to concatenate the tables for some reason instead of just creating a link. It will look like this:
| Header 1 | Header 2 | Header 3 | Header 4 | Header 5 | Header 6 | Header 7 | Header 8 |
Will this affect the performance in some way compared to if I only link the tables and if so how?
Let's say in this case we have OrderNo and OrderRow as common columns, but that's not important here. This is about what effect an uneven table has on QV, if any.
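For concreteness, a concatenation of two uneven tables in the load script could look like this sketch (all table and field names apart from OrderNo and OrderRow are made up):

```
// Orders: common key fields plus an order-specific measure
Orders:
LOAD * INLINE [
OrderNo, OrderRow, OrderAmount
1001, 1, 500
1001, 2, 250
];

// Forcing the invoice rows into the same table: on those rows
// OrderAmount is NULL, and on the order rows InvoiceAmount is NULL
Concatenate (Orders)
LOAD * INLINE [
OrderNo, OrderRow, InvoiceAmount
1001, 1, 480
];
```

The resulting single table has all eight columns, with NULLs wherever a row's source table didn't carry that field.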
Thanks for your answer.
The blog post answers the question of whether to use a link table or concatenation. Not exactly what I was after, but still informative. As the post states, concatenation uses resources more efficiently than a link table, but what if I only have two tables and don't have to create an extra table to link everything together?
The article was written in 2013, when we were working in QV 11, before the QIX engine. There is one major difference here: while the old engine was row-based, the QIX engine is columnar, which I guess will affect the answer on how performance is impacted.
Still a valid answer, thx.
Got your point. If you have uneven tables where the primary key has null values, then yes, it will have an impact on performance. If fields other than the connecting key contain nulls, it might affect the association.
I think QlikView linking will be more appropriate in this case. When you concatenate both tables, the null records will have a performance impact.
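With only two tables, "linking" can simply mean associating them on a combined key rather than building a separate link table. A sketch under assumed names (only OrderNo and OrderRow come from the question), using AutoNumber to build a compact composite key:

```
// Both tables keep their own rows; they associate through %OrderKey
Orders:
LOAD AutoNumber(OrderNo & '|' & OrderRow) AS %OrderKey,
     OrderNo, OrderRow, OrderAmount;
LOAD * INLINE [
OrderNo, OrderRow, OrderAmount
1001, 1, 500
1001, 2, 250
];

// The raw key fields are not loaded here, so only %OrderKey
// connects the tables and no synthetic key is created
Invoices:
LOAD AutoNumber(OrderNo & '|' & OrderRow) AS %OrderKey,
     InvoiceAmount;
LOAD * INLINE [
OrderNo, OrderRow, InvoiceAmount
1001, 1, 480
];
```

Each table stays uneven-free in itself; the NULLs that concatenation would introduce never materialize.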
AFAIK there are only minor changes between the engine of QV 11 and the new QIX engine of QV 12 and Qlik Sense, and in particular the way Qlik stores the data (in symbol and data tables) didn't change significantly.
Therefore I think the suggestions from 2013 are still valid. Further, Qlik's NULL handling was always quite advantageous from a performance point of view, so you normally don't need to worry about NULLs in your tables (except in some particular scenarios and/or when you are at the limits of your environment).
This means it's quite common, and rather best practice, to just concatenate tables if you need to merge them. It's a very easy and fast approach and will quite often produce correct results that are not so easy to get with any other kind of data modelling, most of which will be poorer from a performance point of view.
If I had a similar case I would check whether I could just associate both tables with a combined key over all key fields. If this isn't appropriate I would concatenate them in the above-mentioned way, and only if this is really too slow would I join and/or map both tables, ensuring with additional load steps that every key value is really considered (there might be missing ones on both sides).
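The join/map fallback could be sketched like this (all names beyond OrderNo/OrderRow are hypothetical): a mapping table fills the invoice value onto the order rows, and a further load step would be needed for keys that exist only on the invoice side.

```
// Two-column mapping: composite key -> invoice amount
InvoiceMap:
MAPPING LOAD OrderNo & '|' & OrderRow AS Key, InvoiceAmount;
LOAD * INLINE [
OrderNo, OrderRow, InvoiceAmount
1001, 1, 480
2001, 1, 300
];

Facts:
LOAD OrderNo, OrderRow, OrderAmount,
     ApplyMap('InvoiceMap', OrderNo & '|' & OrderRow, Null()) AS InvoiceAmount;
LOAD * INLINE [
OrderNo, OrderRow, OrderAmount
1001, 1, 500
];

// Note: invoice key 2001|1 has no order row and is silently lost here;
// in a real model you would concatenate those leftover invoice rows
// from the invoice source with a NOT Exists() check on the key
```

This keeps a single flat table like concatenation does, but without NULL-heavy uneven columns; the price is the extra care needed so no key values fall through.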