
Where to do data modeling in Qlikview with HANA as its database


Hi,

In my project, I am using QlikView 11.2 with SAP HANA 1.0.72 as its database. For certain reasons, I am not using the Direct Discovery approach.

When I do data modeling, which of the following two approaches is best:

1) Do the data modeling in HANA and create a view. Load the view into QlikView, store it in a QVD, and use the QVD for creating the QVW reports.

2) Load the HANA tables into QlikView and create QVDs for the fact and dimension tables. With the QVDs as input, perform the data modeling in another QVW, and use that QVW as input for developing the actual reports.
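For context, the extract tier of approach 2 is typically a short load script like the sketch below (the DSN, schema, and table names are illustrative placeholders, not from this post):

```
// Approach 2, extract tier: pull HANA tables and store them as QVDs.
// 'HANA_DSN', schema, and table names are hypothetical examples.
ODBC CONNECT TO 'HANA_DSN';

Facts:
SQL SELECT * FROM "MYSCHEMA"."SALES_FACT";
STORE Facts INTO Facts.qvd (qvd);
DROP TABLE Facts;       // free memory once the QVD is written

Dim_Customer:
SQL SELECT * FROM "MYSCHEMA"."CUSTOMER_DIM";
STORE Dim_Customer INTO Dim_Customer.qvd (qvd);
DROP TABLE Dim_Customer;
```

A separate modeling QVW then reloads these QVDs, which keeps the slow database extraction isolated from the modeling and report layers.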

I have to implement currency conversion logic as well, and in my reports I have a currency filter for user selection.
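One common way to handle currency conversion in the QlikView script is a mapping table of exchange rates applied during the fact load. A minimal sketch, assuming a hypothetical rate table and field names:

```
// Hypothetical rate table: CurrencyCode -> RateToEUR.
// File and field names are assumptions for illustration.
RateMap:
MAPPING LOAD CurrencyCode, RateToEUR
FROM Rates.qvd (qvd);

Facts:
LOAD *,
     // Default rate of 1 if a currency code is missing from the map.
     Amount * ApplyMap('RateMap', CurrencyCode, 1) AS AmountEUR
FROM Facts.qvd (qvd);
```

Keeping both the original Amount and the converted AmountEUR in the model lets a currency filter in the UI decide which field the chart expressions use.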

I would like to know which approach is best performance-wise. As far as I understand, since both end up in memory, both approaches would have the same performance.

Please suggest.

Thanks

Padma

4 Replies
Anonymous

QlikView always performs best using its native .qvd files as its source. As for your two approaches, it really depends on the environment and the order in which you prefer to do your data modeling. Extracting the needed tables from HANA and storing them in QVDs would help expedite future data transformations for other projects if you are modeling in QlikView.

sathishkumar_go
Partner - Specialist

Hi Padma,

I would suggest creating the view in HANA, loading that view into QlikView, and storing it into a QVD.

After that, use the QVDs to build the data model in QlikView.
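The flow described here can be sketched as two script tiers (the view and file names are placeholders):

```
// Tier 1: load the HANA view once and store it as a QVD.
// "V_SALES_MODEL" is a hypothetical view name.
HanaView:
SQL SELECT * FROM "MYSCHEMA"."V_SALES_MODEL";
STORE HanaView INTO HanaView.qvd (qvd);
DROP TABLE HanaView;

// Tier 2 (a separate modeling QVW): build the data model from the QVD.
// A plain LOAD * from a QVD with no transformations runs as an
// "optimized" QVD load, which is very fast.
Sales:
LOAD * FROM HanaView.qvd (qvd);
```

This two-tier layout also means report QVWs never hit HANA directly, so reloads of the presentation layer stay fast even when the database is busy.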

Regards

Sathish

JonnyPoole
Employee

If you do the modelling in HANA, you will be able to reuse those new elements for other 'customers' of the database, whether they are raw data consumers or other BI presentation tools.

But if the purpose of the model is ONLY for presentation in QlikView, then it's faster to do the modelling in QlikView, and ongoing maintenance and updates will also be quicker. Your business customers may not like it if changes take a long time to make or deploy.

Also worth noting: no matter what you do in HANA or any source database, over time there are likely to be updates that you make only in the QlikView layer. So regardless of how you prepare the data, you are likely to have additional tweaks in QlikView. One of the main reasons is that Qlik uses an associative model in which loops are not allowed, unlike other models that may have conformed dimensions with possible loops.

Lastly, users will see no performance gain from the extra modelling you do in HANA unless you are using a Direct Discovery solution. This is because Qlik's in-memory model takes user requests off the database altogether.

Anonymous

Sri,

The best place to transform and "model" your data is usually your data warehouse. This gives you a single central repository for all your data, which you can use for Direct Discovery or for SQL SELECT loads into your applications.

While I am a huge proponent and user of the QlikView ETL model when you don't have a data warehouse, or for data that isn't in your data warehouse, the enterprise data warehouse needs to be leveraged when it exists.

Yes, QVDs do load super fast and are great, but it can take time to extract the data, transform it, and then manage a set of operational QVDs alongside your data warehouse data.

QlikView scripting will always win out on flexibility and speed of doing the ETL versus a data warehouse, but it can get convoluted and create many versions of the truth in your QVD repository.

Bottom line: always go with the process that takes the least amount of time to refresh accurate, auditable data to your applications.

Hope this helps

BTW - I am currently doing a POC with Direct Discovery and HANA. I love the concept, and the process is really fast in the client, but it bogs down on the AccessPoint. Also, with no set analysis available, Direct Discovery is tough to justify or use for the robust UIs we are used to creating.