
imtiaz_ullah
New Contributor II

Load Data into QS/QV - Best Practice

Hi all,

Just interested in your thoughts on a best practice for importing data into Qlik.

Historically I have always worked on a Star Schema model in SQL Analysis Services, so the traditional Fact with Dimensions etc.

One thing I have noticed, however (more in QS, as I'm only just getting my hands on QV), is that during the load process I could either write a single wide SQL query that has already joined on to the dimensions, or load each dimension as its own table plus the fact, and leave Qlik to create the model.
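Roughly what I mean in load-script terms (table and field names are made up, a configured data connection is assumed, and you would obviously pick one option or the other):

// Option 1: one wide, pre-joined SQL query
FactWide:
SQL SELECT f.OrderID,
           f.Amount,
           c.CustomerName,
           p.ProductName
FROM SalesFact f
JOIN DimCustomer c ON c.CustomerKey = f.CustomerKey
JOIN DimProduct  p ON p.ProductKey  = f.ProductKey;

// Option 2: load the fact and each dimension as its own table and let
// Qlik associate them on the identically named key fields
Fact:
SQL SELECT OrderID, CustomerKey, ProductKey, Amount FROM SalesFact;

DimCustomer:
SQL SELECT CustomerKey, CustomerName FROM DimCustomer;

DimProduct:
SQL SELECT ProductKey, ProductName FROM DimProduct;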

I know that in QS you can create dimensions on the fly within the app, so could this be used to work around the need for separate dimension tables?

Just interested in your thoughts really.

Thanks

Immy

3 Replies
Employee

Re: Load Data into QS/QV - Best Practice

To some extent, it is a matter of taste.

I prefer a multi-table data model, i.e. a snowflake, because it is by far the easiest to manage and the least error-prone. You also don't have to mess with any advanced SQL.

For performance, you could however argue that everything-in-one-table is a better (faster) solution. You should be aware, though, that QlikView calculates aggregations over the records of the source table. By joining tables you may change the number of records, so that QlikView in effect could make an incorrect calculation.
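A small made-up example of what I mean: an order-level charge repeated by a join gets counted several times.

// Orders: one record per order, with an order-level Freight charge
Orders:
LOAD * INLINE [
OrderID, Freight
1, 100
2, 50
];

// OrderLines: several records per order
OrderLines:
LOAD * INLINE [
OrderID, Product, Quantity
1, A, 3
1, B, 1
2, C, 2
];

// Kept as two associated tables, Sum(Freight) = 150 as expected.
// Joining the line table into Orders repeats Freight once per line,
// so the same Sum(Freight) would return 250:
//
//   JOIN (Orders) LOAD * RESIDENT OrderLines;
//   DROP TABLE OrderLines;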

See more on

To Join or not to Join

A Myth about the Number of Hops

HIC

Re: Load Data into QS/QV - Best Practice

This one may be of interest too: Don't join - use Applymap instead
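In script terms, the pattern is roughly this (names and paths are just placeholders):

// Mapping table: exactly two fields, the lookup key and the return value
CustomerMap:
MAPPING LOAD
    CustomerID,
    CustomerName
FROM [lib://Data/Customers.qvd] (qvd);

// Fact load: look the name up instead of joining the dimension table.
// The third argument is the default used when no match is found.
Sales:
LOAD
    OrderID,
    CustomerID,
    ApplyMap('CustomerMap', CustomerID, 'Unknown') as CustomerName,
    Amount
FROM [lib://Data/Sales.qvd] (qvd);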

In QS you can't create dimensions on the fly afaik. Unless you mean master dimensions.

You might also want to take a look at the data modeling section of the Qlik Sense online documentation: Introduction to Data Modeling


talk is cheap, supply exceeds demand
Employee

Re: Load Data into QS/QV - Best Practice

An interesting topic that comes up in this area is whether to 'clean' the dimensions and facts on the load or to leave them 'as is'.

Many folks would like to clean all the data so that there won't be anything to question, but there is real value in loading data as is: the oddities in the facts or dimensions that don't make sense at first glance allow an analyst to find issues and make discoveries about the information that only a straight (non-cleansed) load can afford. You might allow your users to find something valuable!
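Just for context, 'cleaning on the load' typically means transformations along these lines (field names and the path are made up), whereas the as-is approach simply leaves them out:

// Cleaned dimension load: trim whitespace, standardise case,
// and replace missing countries with a placeholder
DimCustomer:
LOAD
    CustomerID,
    Capitalize(Trim(CustomerName)) as CustomerName,
    If(Len(Trim(Country)) = 0, 'Unknown', Country) as Country
FROM [lib://Data/Customers.qvd] (qvd);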

Anyway, something to think about as you load your star schema.
