MS90
Creator

How to handle large datasets without loading data and without creating a QVD

Is it possible to handle large datasets without loading the data into the application and without creating a QVD? We also don't have the option of using Direct Query against Snowflake, Databricks, or Azure.

5 Replies
marcus_sommer

I think you need to elaborate in more detail on what you mean by large datasets and why you don't intend to apply any intermediate storage layer (QVD, binary load).

MS90
Creator
Author

There are 500 million records in the dataset and we are trying to implement direct access to the database: whichever filter is applied in a visualization, the matching data should be fetched dynamically.

As the data is huge, we don't want to load it into the Qlik app or use any intermediate layer.

marcus_sommer

It's not really Qlik related. Whether you can avoid an intermediate layer depends mainly on the speed of your source and the network, in regard to the complexity and size of the queried sub-set, how many calls may happen within a certain timespan, and which response times are acceptable.

MS90
Creator
Author

Currently we are using QVDs to pull the data, and there are only a few users, but due to the high volume of data the application is slow. So we are looking for an alternative option; for example, is it possible to run a SQL query in the script directly?
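
For context, by a direct SQL query I mean a plain pass-through statement in the load script, roughly like the sketch below (the connection and table names are placeholders):

LIB CONNECT TO 'Snowflake_Connection';  // hypothetical data connection

Orders:
SQL SELECT OrderID, OrderDate, Amount
FROM SALES_DB.PUBLIC.ORDERS;  // hypothetical source table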

marcus_sommer

Nearly nothing is faster than an optimized QVD load, except a binary load - neither in Qlik nor within any other tool.
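
For reference, a QVD load only stays optimized as long as there are no transformations and at most a WHERE EXISTS() clause on a single field; a minimal sketch (the path is a placeholder):

Facts:
LOAD * FROM [lib://DataFiles/Facts.qvd] (qvd);  // optimized: no transformations, no WHERE clause

// Any expression or a normal WHERE condition would force a much slower, unoptimized load.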

By applying a multi-staged incremental load process which pulls just the new/changed records from the sources and the older ones from the QVDs and/or a binary load, updating 500 M records is no big deal (see the sketch below). AFAIK there is no alternative in regard to the performance and simplicity of the ETL - and surely not with classical SQL.
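
A minimal sketch of such an incremental load, assuming a key field OrderID and a ModifiedDate field in the source (all connection, field, and path names are placeholders):

LET vLastReload = '2024-01-01 00:00:00';  // in practice, persisted from the previous successful run

// 1. Pull only the new/changed records from the source
LIB CONNECT TO 'Snowflake_Connection';
Orders:
SQL SELECT OrderID, OrderDate, Amount, ModifiedDate
FROM SALES_DB.PUBLIC.ORDERS
WHERE ModifiedDate >= '$(vLastReload)';

// 2. Add the unchanged history from the existing QVD (stays an optimized load)
Concatenate (Orders)
LOAD * FROM [lib://DataFiles/Orders.qvd] (qvd)
WHERE NOT EXISTS(OrderID);

// 3. Store the combined table back for the next run
STORE Orders INTO [lib://DataFiles/Orders.qvd] (qvd);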

I assume there is probably a lot of potential to optimize the load process + data model + UI design, and before looking for alternative approaches you should ensure that the applied tool is used properly.