Qlik Community

QlikView App Dev

Discussion Board for collaboration related to QlikView App Development.
Contributor III

Data Model Performance

I was hoping to get some feedback on a data solution that will have ~50 million rows and 60 or so fields in total.  Basically I'm looking at the trade-offs between a star schema (maybe with a snowflake or two) and keeping as much as possible in one big fact table.  I attached a spreadsheet to hopefully make the questions more clear.


Tab “Long vs Wide”

  1. This is less of a question and more of a confirmation, as I think this part is more straightforward.  Essentially we take a long table and turn it into a wide table to pre-compute some calculations in the load script.  I would assume this helps significantly if the "original calculation" noted below is very commonly used.
    • Type D = Type A + Type B
    • Original calculation: Sum({<Type={'A','B'}>} Measure1)
    • New calculation: Sum(Measure1_D)
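In load-script terms, the long-to-wide step above might look something like the following sketch. Only Type, Measure1, and the Measure1_D naming come from the example; the table name LongFact, the key field, and the source layout are assumptions for illustration.

```qlikview
// Hypothetical long table: one row per KeyField/Type combination.
// Pivot it so each Type becomes its own measure column, and
// pre-compute the commonly used A+B total as Measure1_D.
WideFact:
LOAD
    KeyField,
    Sum(If(Type = 'A', Measure1)) AS Measure1_A,
    Sum(If(Type = 'B', Measure1)) AS Measure1_B,
    Sum(If(Match(Type, 'A', 'B'), Measure1)) AS Measure1_D
RESIDENT LongFact
GROUP BY KeyField;

DROP TABLE LongFact;
```

With the wide table in place, the chart expression reduces to a plain Sum(Measure1_D), so the engine no longer has to evaluate a set expression on every chart refresh.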

Tab “Dim Tables”

  1. The question here is mostly about the performance of splitting out dimension tables versus adding the fields onto the fact table.  As I understand it, if memory weren't an issue, one giant table with all fields would work best from a processing-speed perspective.  I can see that a dimension with three or more levels of text may have to be split out into a star schema for memory reasons.  But what about dimensions that aren't filtered or referenced in expressions often, and have only one or two levels (e.g. Code and Description) with a low degree of uniqueness?  Does splitting these out help anything, or is Qlik already doing something similar on the back end anyway?
  2. Essentially it seems like the example I provided manually recreates a process similar to how the Qlik engine builds its data tables and symbol tables on the back end.  So does it make more sense (assuming front-end performance is an issue and load-script performance is not) to keep as much as possible in the fact table and let Qlik handle the compression on its own?
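For reference, the two modelling options being compared might be sketched as follows. All table, file, and field names here are hypothetical; only the Code/Description shape of the dimension comes from the question.

```qlikview
// Option 1: star schema - keep only the key on the fact table and
// load the low-cardinality Code/Description pair as its own dimension.
Fact:
LOAD OrderID, ProductCode, Amount
FROM [fact.qvd] (qvd);

ProductDim:
LOAD DISTINCT ProductCode, ProductDescription
FROM [products.qvd] (qvd);

// Option 2: one big table - join the description onto the fact and
// rely on Qlik's symbol tables to store each distinct value only once.
// LEFT JOIN (Fact)
// LOAD ProductCode, ProductDescription
// FROM [products.qvd] (qvd);
```

In either variant, each distinct text value is stored only once in the symbol table; the practical difference is an extra pointer column in the fact table (Option 2) versus an extra table hop at chart-calculation time (Option 1).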


Thanks for any assistance you can provide.  Let me know if I can be more clear on either of these.

2 Replies
Support (Former)

Best I have is the following Design Blog post, but I am pretty sure there are also some prior posts related to this.  If you have not searched the Community yet, I would recommend doing so; you should be able to find some of those earlier discussions.


Here is the base Design Blog URL in case you want to search further there:


Scalability area is another good place to search:


And lastly, you may find something on Rob W's Cookbook pages too:


Sorry I do not have anything better.  You will likely need to attach a QVW with the data model etc. in order to get further input from others; trying to evaluate this ahead of time is challenging.  It is better to put things together and then post the app when something is slow, so folks can see what you have done with the data model and expressions and comment on areas where you may be able to improve things.


To help users find verified answers, please do not forget to use the "Accept as Solution" button on any post(s) that helped you resolve your problem or question.
I now work a compressed schedule, Tuesday, Wednesday and Thursday, so those will be the days I will reply to any follow-up posts.
Contributor III

Thanks for the pointers! I'm pretty sure it makes little sense to try to do Qlik's job for it and expect better results, but I'll look into those links and see if I can get further clarification.