Just interested in your thoughts on a best practice for importing data into Qlik.
Historically I have always worked on a star schema model in SQL Server Analysis Services, i.e. the traditional fact table with dimensions.
One thing I have noticed (more in QS; I am only just getting my hands on QV) is that during the load process I could either write one wide SQL query that has already joined the fact onto the dimensions, or load each dimension as its own table along with the fact, and leave Qlik to create the model.
I know that in QS you can create dimensions on the fly within the app, so could this be used to work around the need for separate dimension tables? A sketch of the two load approaches is below.
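For concreteness, here is a minimal sketch of the two approaches in Qlik load script. Table and field names are hypothetical, and a database connection (LIB CONNECT / ODBC CONNECT) is assumed to already be in place:

    // Option 1: one wide, pre-joined table straight from SQL
    Sales:
    SQL SELECT s.SaleID, s.Amount,
           p.ProductName, p.Category,
           c.CustomerName, c.Region
    FROM dbo.Sales s
    JOIN dbo.Product p ON p.ProductID = s.ProductID
    JOIN dbo.Customer c ON c.CustomerID = s.CustomerID;

    // Option 2: load each table separately and let Qlik associate
    // them automatically on the shared key field names
    Fact:
    SQL SELECT SaleID, Amount, ProductID, CustomerID FROM dbo.Sales;

    Product:
    SQL SELECT ProductID, ProductName, Category FROM dbo.Product;

    Customer:
    SQL SELECT CustomerID, CustomerName, Region FROM dbo.Customer;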
I prefer a multi-table data model, i.e. a snowflake, because it is by far the easiest to manage and the least error-prone. You also don't have to mess with any advanced SQL.
For performance, you could argue that everything-in-one-table is a better (faster) solution. Be aware, though, that QlikView calculates aggregations over the records of the source table. If a join is not one-to-one, it changes the number of records, so QlikView could in effect make an incorrect calculation, e.g. a Sum() that counts the same value several times.
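A small illustration of that pitfall, using hypothetical inline data: the budget table has one row per region, but joining it onto a fact table with several rows per region duplicates the budget values.

    Facts:
    LOAD * INLINE [
    Region, Sales
    North, 100
    North, 200
    South, 300
    ];

    // Joining the one-row-per-region budget onto the fact table
    Left Join (Facts)
    LOAD * INLINE [
    Region, BudgetAmount
    North, 500
    South, 400
    ];

    // North's budget of 500 now sits on two fact rows, so
    // Sum(BudgetAmount) returns 1400 instead of the correct 900.
    // Keeping the budget as its own associated table avoids this.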
An interesting question that comes up in this area is whether to 'clean' the dimensions and facts during the load or to leave them 'as is'.
Many folks want to clean all the data so that there is nothing left to question, but there is real value in loading data as is: the oddities in the facts or dimensions that don't make sense at first glance can lead an analyst to issues and discoveries about the information that only a straight (non-cleansed) load can surface. You might allow your users to find something valuable!
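One middle ground, as a rough sketch (field names and the source path here are hypothetical): load the data as is, but add a flag field in the script, so the oddities stay in the model and analysts can filter straight to them.

    Facts:
    LOAD
        OrderID,
        CustomerID,
        Amount,
        // Flag rows that look odd instead of cleaning them away
        If(Amount < 0 or Len(Trim(CustomerID)) = 0, 1, 0) as DataOddityFlag
    FROM [lib://Data/Sales.qvd] (qvd);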
Anyway... something to think about as you load your star schema.