tribangen
Contributor

Best practices (data warehouse+Qlik)

Hi,

I’m a BA for the data warehouse and the Qlik apps at our company. I recently switched companies, and at my new company they try to solve almost everything with Qlik apps. But in all my classes and certificate programs I was taught that the front-end tool should be ‘dumb’ and that the data logic should live in the data warehouse. The Qlik consultants think otherwise and want to do the transformations in QVDs instead. What is the best practice in your company?

2 Replies
hic
Former Employee

If you want all logic and all calculations to take place in the data warehouse, then you lose most of the advantages of Qlik. So I wouldn't recommend that.

Qlik's analysis paradigm is to NOT use the DW at analysis time, but instead to load all data in-memory on the Qlik server and to perform all logic and calculations there (a short load-script sketch follows the list below). The result is

* an extremely fast logical inference engine
* greater freedom in how to make selections
* no pre-calculated cubes
* no double counting (which you often get with joins in a DW)
* correct handling of fan traps and chasm traps
* the ability to handle advanced data modeling, e.g. hierarchies and slowly changing dimensions
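
A minimal sketch of what that paradigm looks like in a load script. The connection name, tables and fields are hypothetical; the point is that the raw DW tables are pulled into memory and the associative engine links them on the shared field:

LIB CONNECT TO 'DW';  // hypothetical data connection to the warehouse

// Pull the raw fact and dimension tables into memory.
Orders:
LOAD OrderID, CustomerID, OrderDate, Amount;
SQL SELECT OrderID, CustomerID, OrderDate, Amount FROM dw.fact_orders;

Customers:
LOAD CustomerID, CustomerName, Region;
SQL SELECT CustomerID, CustomerName, Region FROM dw.dim_customer;

// Qlik associates the two tables automatically on CustomerID.
// All further logic and calculations (e.g. Sum(Amount) per Region)
// are then evaluated on this in-memory model at selection time.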

Lemac
Contributor III

In our company there are two streams: Analysis, where users interact with the data, and Dashboarding, where they do not. (There is also Reporting, but that is handled by NPrinting.)

Analysis: These should be rich data models with plenty of freedom. On the other hand, if you load too much data into the server, the app becomes slow and unresponsive.

As a best practice, I try to limit my front-end models to a maximum of 7 linked tables. Much more than that becomes hard to understand and to administer. Try it yourself: a small data model of 4 tables is far easier to understand than one with 8. I therefore prepare data in the back end so that the number of tables in the front end goes down. For example: if I would add the Product table just to show the product name, I instead add ProductName to the Sales table in the back end. That way the front-end data model becomes simpler and easier to understand (a sketch of this follows below).
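
A minimal sketch of that back-end step, with hypothetical file and field names, using a mapping load to fold the product name into the Sales table:

// Build a ProductID -> ProductName lookup from the product QVD.
ProductMap:
MAPPING LOAD ProductID, ProductName
FROM [lib://Data/Products.qvd] (qvd);

// Denormalize: write ProductName straight into the Sales table,
// so the front-end model no longer needs a separate Product table.
Sales:
LOAD SaleID,
     ProductID,
     ApplyMap('ProductMap', ProductID, 'Unknown') AS ProductName,
     Quantity,
     Amount
FROM [lib://Data/Sales.qvd] (qvd);

STORE Sales INTO [lib://Data/Sales_Enriched.qvd] (qvd);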

By making the front end dumb, you limit the users' capacity to interact with the data, to explore, to ask themselves questions and to verify them against the data.

By making the front end too complicated, you put the users at risk of making wrong assumptions or selecting the wrong dimensions. E.g. if you have an Order Date, a Production Date, a Delivery Date and a Payment Date in the same model, users are prone to selecting the wrong date when verifying their assumptions.
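
One way to reduce that risk, sketched with hypothetical names: give each date a role-specific, unambiguous name at load time, and leave out the dates a given app does not need:

Orders:
LOAD OrderID,
     Date(OrderDate)    AS [Order Date],
     Date(DeliveryDate) AS [Delivery Date]
     // ProductionDate and PaymentDate deliberately omitted from
     // this analysis app to avoid selections on the wrong date.
FROM [lib://Data/Orders.qvd] (qvd);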

Dashboarding: Here the key is uptime. I have some complex dashboards which, when refreshed every minute, take up to 20 seconds to load. In these dashboards I have moved all the intelligence to the back end: the back end calculates and prepares all the tables, and the front end just loads them and displays the graphs. Although the back end processes a lot of data, the tables used by the front end are really small. The key is that this is dashboarding, so there is no interaction with the data, and you lose nothing by not exposing the detail data (a sketch follows below).
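
A minimal sketch of that split, with hypothetical names. The back-end app does the heavy aggregation and persists a small result; the front-end dashboard only loads that result:

// Back-end app: aggregate the full fact data on a schedule...
KpiPerRegion:
LOAD Region,
     Sum(Amount)             AS TotalAmount,
     Count(DISTINCT OrderID) AS OrderCount
FROM [lib://Data/Sales.qvd] (qvd)
GROUP BY Region;

// ...and persist the small, pre-computed table.
STORE KpiPerRegion INTO [lib://Data/KpiPerRegion.qvd] (qvd);

// Front-end dashboard app: just load the small table and chart it.
KpiPerRegion:
LOAD * FROM [lib://Data/KpiPerRegion.qvd] (qvd);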