markp201
Creator III

Hyper-normal data models...

I wanted to get the community's thoughts on using a hyper-normal data model with QlikView.

Specifically

1. Scalability & Performance

2. Script maintenance

Hyper-normal models will often have 2-3 times the number of tables and associations, but most of the tables are bridges or factless facts. Here is a link to a 20-minute video.

Pros & Cons of Hyper Normalized Data Models for Data Warehouses - YouTube
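For a rough idea of the structure, here is a minimal sketch in load script: a couple of ordinary entity tables plus a factless bridge that carries only keys. All table and field names here are invented for illustration.

// Illustration only - invented names; a hyper-normal model has many
// bridge tables like CustomerProduct below, holding keys and no measures.
Customer:
LOAD * INLINE [
CustomerID, CustomerName
1, Acme
2, Beta
];

Product:
LOAD * INLINE [
ProductID, ProductName
10, Widget
20, Gadget
];

CustomerProduct:
LOAD * INLINE [
CustomerID, ProductID
1, 10
1, 20
2, 10
];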

1 Solution

Accepted Solutions
marcus_sommer

Many of the problems from traditional databases / BI tools don't exist within Qlik. Normally you don't need to worry much about the table relations or the normalization of the data; you can simply load your tables and associate them with a key field (of course avoiding synthetic keys and circular loops).
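As a minimal sketch (the QVD sources and field names are just assumptions), two loaded tables associate automatically on their one shared field, and keeping only one field in common between them is what avoids a synthetic key:

Orders:
LOAD
    OrderID,
    CustomerID,
    OrderDate
FROM Orders.qvd (qvd);

OrderLines:
LOAD
    OrderID,      // the only shared field -> a clean association, no synthetic key
    ProductID,
    Quantity,
    LineAmount
FROM OrderLines.qvd (qvd);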

Besides that, the hyper-normal data model is quite common within Qlik, where it is most often called a link-table model (often not applied strictly between all tables, but between the main fact tables). These models are quite logical, but they take some effort to build and they run into performance issues with larger datasets. It is often easier and more performant to create the data model as a star schema.
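As a minimal sketch of the link-table idea, assuming two fact tables Sales and Budget that share CustomerID and MonthID (all names and QVD sources are invented), the shared fields are replaced by one composite key and only the link table keeps the key fields:

Sales:
LOAD
    Autonumber(CustomerID & '|' & MonthID) as %LinkKey,
    SalesAmount
FROM Sales.qvd (qvd);

Budget:
LOAD
    Autonumber(CustomerID & '|' & MonthID) as %LinkKey,
    BudgetAmount
FROM Budget.qvd (qvd);

LinkTable:
LOAD DISTINCT
    Autonumber(CustomerID & '|' & MonthID) as %LinkKey,
    CustomerID,
    MonthID
FROM Sales.qvd (qvd);

Concatenate (LinkTable)
LOAD DISTINCT
    Autonumber(CustomerID & '|' & MonthID) as %LinkKey,
    CustomerID,
    MonthID
FROM Budget.qvd (qvd);

The star-schema alternative would instead keep CustomerID and MonthID directly on a single central fact table with plain dimension tables attached, and no link table at all.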

Conclusion: I wouldn't recommend a hyper-normal data model; personally, I use parts of it in only one or two cases.

- Marcus


2 Replies

markp201
Creator III
Author

Thanks Marcus. The reasoning here is that proponents claim changes to the operational data have minimal impact on maintaining the data model. The problem is that these models are quite difficult to document and map back to the operational sources, because entities with a similar structure get combined. It's all very confusing.

I would agree - a star schema is the best solution.