Qlik Community

QlikView App Development


diwakarnahata
Contributor

Performance Optimization with high data volumes

Hi Qlikers,

I am facing a strange issue.

To optimize the performance of a heavy application, we have split the bigger app (5 GB) into smaller apps (~1.5-2.0 GB each) by Market. Market is a single-select field in the original bigger app.
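For reference, the market-reduced apps can be produced with a reduced reload along these lines (a minimal sketch; the field name `Market`, the market value, and the QVD name are assumptions, not from the thread):

```
// Keep only one market's rows from the shared QVD layer.
// Using Where Exists() keeps the QVD load optimized;
// a plain WHERE Market = '...' clause would force a slower,
// non-optimized load.
MarketFilter:
LOAD * INLINE [
Market
Germany
];

Facts:
LOAD *
FROM Facts.qvd (qvd)
WHERE Exists(Market);

DROP TABLE MarketFilter;
```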

However, we do not see any significant performance improvement in the smaller apps, even though they are significantly smaller in size.

Does this mean that making a Market selection in the bigger app is the same as reducing the app by Market? In other words, do we save any computation by reducing the app by Market compared to single-selecting the Market?

Also, can we conclude that splitting an app will not improve its performance, and that one should instead focus on making more fields single-select?

Regards,

Diwakar

1 Solution

Accepted Solutions
MVP

Re: Performance Optimization with high data volumes

Please have a look at

Logical Inference and Aggregations

The Calculation Engine

Your Market selection is handled in the first step, logical inference, which is already highly optimized and returns the record set that the calculation engine then works on.

If you work with the single-market applications in parallel, or if you have plenty of free memory, then simply splitting your large application into several market applications probably won't yield a big performance gain.
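To illustrate the point (an illustrative sketch; the field name `Market` and the value are assumptions): in the full app, a single Market selection restricts the record set through logical inference before any aggregation runs, so these two expressions are computed over the same records:

```
// In the full app, with Market single-selected to 'Germany':
Sum(Sales)

// The same inferred record set, written explicitly with set analysis:
Sum({<Market = {'Germany'}>} Sales)
```

Either way, the calculation engine aggregates only the rows that logical inference has already narrowed down, which are the same rows a market-reduced app would contain.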


3 Replies

Re: Performance Optimization with high data volumes

I don't think splitting is a good solution. You should first focus on reducing the data volume of your application: optimize your data model and optimize your front-end expressions.
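One common front-end optimization, for example, is replacing row-by-row If() conditions inside aggregations with set analysis (an illustrative sketch; field names are assumptions):

```
// Slower: If() is evaluated row by row inside the aggregation
Sum(If(Year = 2019, Sales))

// Usually faster: set analysis restricts the record set before aggregating
Sum({<Year = {2019}>} Sales)
```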

MVP & Luminary

Re: Performance Optimization with high data volumes

Generally I would agree that there is no performance improvement (other than document open time) in reducing apps by Market vs. requiring a single Market selection in a single app.

-Rob
