Ranjanac
Contributor III

Data Modelling - Doubts; Guidance Required

Hi All,

I am new to Qlik Sense. Below is a list of questions I still have doubts about after trying things myself. Could you please guide me?

1. Suppose we have multiple files containing duplicate values, and I want to load all the files into one table.
Can we use a link table for that, or is there another way? Attaching a QVF for the same.

2. I have two tables with the same structure. I want to remove the duplicate values and load both tables into one. How can I do that? Attaching a QVF for the same.

3. When I have multiple fact tables in QlikView, they can be handled in two ways: by using
concatenate or by using link tables. Can you please help me understand this with an example?

Thanks & Regards,

Ranjana

2 Replies
brunobertels
Master

Hi,

For 1: try this with JOIN:

Sales:
LOAD * INLINE [
StoreID, ProductID, Sales, BudgetQty, BudgetValue
1, 1, 5, 90%, 50
1, 2, 6, 50%, 47
2, 1, 5, 95%, 41
2, 2, 4, 20%, 27
];

// Join merges this load into the previously loaded table (Sales)
// over the common fields StoreID, ProductID, BudgetQty, BudgetValue
Profit:
Join LOAD * INLINE [
StoreID, ProductID, Profit, BudgetQty, BudgetValue
1, 1, 5, 90%, 50
1, 2, 6, 50%, 47
2, 1, 5, 95%, 41
2, 2, 4, 20%, 27
];

// Joined in the same way, adding the Budget% field to the combined table
Budget:
Join LOAD * INLINE [
StoreID, ProductID, Budget%, BudgetQty, BudgetValue
1, 1, 5, 90%, 50
1, 2, 6, 50%, 47
2, 1, 5, 95%, 41
2, 2, 4, 20%, 27
];
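
After the two joins everything ends up in one combined table. If for 1 your files all share the same structure, another option is to simply load them all and keep only the distinct rows; a rough sketch, where the lib path, file mask and field names are only placeholders for your own files:

// Wildcard load: every matching file is loaded into one temporary table
TmpSales:
LOAD
    StoreID,
    ProductID,
    Sales
FROM [lib://Data/Sales_*.qvd] (qvd);

// Keep only distinct rows across all files; NoConcatenate stops the
// resident load from auto-concatenating back onto TmpSales
Sales:
NoConcatenate
LOAD DISTINCT *
RESIDENT TmpSales;

DROP TABLE TmpSales;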

marcus_sommer

Everything in Qlik should start with a star-schema data model, which means having a single (vertically and/or horizontally) merged fact table (with field names and data structures harmonized as far as possible) and n surrounding dimension tables.

There may be a lot of challenges in matching everything, but all this work needs to be done regardless of the data model. In the end there may be scenarios in which it's suitable to extend the star schema or, very rarely, to replace it with another data model, but nothing is as simple and fast to develop as a star schema for validating data and logic and creating the first views.

Besides this, duplicates are not necessarily an error; they could be valid data. Removing duplicates might be as simple as a LOAD DISTINCT. For somewhat more complex scenarios you will need a unique identifier to flag or remove them - exists() is very powerful in this regard.
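
For example (table, field and file names here are just made up): to stack two identically structured tables and keep only the first occurrence of each key - which also covers question 2 - exists() can filter the second load:

Orders:
LOAD OrderID, CustomerID, Amount
FROM [lib://Data/Orders_A.qvd] (qvd);

// Only rows whose OrderID has not already been loaded are appended;
// exists() checks against all OrderID values loaded so far
Concatenate (Orders)
LOAD OrderID, CustomerID, Amount
FROM [lib://Data/Orders_B.qvd] (qvd)
WHERE NOT Exists(OrderID);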

I suggest you just start by playing with some sub-sets and concatenating the facts - and then apply all harmonizing/cleaning/preparing tasks step by step.
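
A small sketch of such a start, with purely illustrative table, field and file names - two fact sub-sets concatenated into one fact table with harmonized field names, plus one dimension table around it:

// One fact table built from two sources; a flag keeps the records distinguishable
Facts:
LOAD
    StoreID,
    ProductID,
    Sales    as Amount,
    'Sales'  as FactType
FROM [lib://Data/Sales.qvd] (qvd);

Concatenate (Facts)
LOAD
    StoreID,
    ProductID,
    Budget   as Amount,
    'Budget' as FactType
FROM [lib://Data/Budget.qvd] (qvd);

// Dimension table linking to the facts over StoreID
Stores:
LOAD StoreID, StoreName, Region
FROM [lib://Data/Stores.qvd] (qvd);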