The data set is big, the App is large, and the App opening time is SLOW. Once the App is open, Section Access keeps sheet navigation performance adequate, but an opening time of 5 mins isn't ideal. What are the options?
To provide some background, the source data is approaching 300 million transactions, and the transactions have been entered by approx. 1,000 different entities. Each entity needs to see their own data and no-one else's, so an App has been created with Section Access. The App is approximately 10 GB in size. When an entity opens the App it takes approx. 5 mins to open, but after Section Access kicks in and reduces the data down to a single entity, the App performs adequately, displaying visualisations in seconds. So the issue is solely the time it takes to open the App. Is there a way to improve the performance of opening the App for each entity?
Please note:
1. the data model has been extensively tuned and trimmed e.g. it doesn't contain any redundant data or superfluous fields, auto numbers are in use, the model is optimised for Section Access etc.
2. the data model requires unit level data and can NOT be aggregated in the data model
3. smaller Apps have been created for tailored purposes, but a 'wide' and 'deep' App ('the App' discussed here) is required
4. the App takes a number of minutes to LOAD when restricted to a single entity, so on demand App generation isn't a viable option
5. the infrastructure has been optimised within reason
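For context, a Section Access setup along the lines described (each entity restricted to its own rows) typically looks something like the sketch below. This is a hedged illustration, not the actual script from the App; the field names (`EntityId`, `REDUCTION`), library paths, and QVD names are all assumptions.

```qlik
// Section Access: map each login to a reduction value.
// The REDUCTION field must also exist (upper-cased values) in the data model.
Section Access;
LOAD * INLINE [
    ACCESS, USERID,            REDUCTION
    ADMIN,  DOMAIN\ADMINUSER,  *
    USER,   DOMAIN\ENTITY0001, E0001
    USER,   DOMAIN\ENTITY0002, E0002
];
// In practice the ~1,000 user rows would come from a maintained source, e.g.:
// LOAD 'USER' as ACCESS, UserId as USERID, Upper(EntityId) as REDUCTION
// FROM [lib://Security/entity_users.qvd] (qvd);

Section Application;
Transactions:
LOAD *,
     Upper(EntityId) as REDUCTION   // reduction field; values upper-cased to match
FROM [lib://Data/transactions.qvd] (qvd);
```

With this pattern the full 10 GB model is still loaded into memory on open; the reduction to one entity only happens per session, which is consistent with the 5-minute opening times reported above.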
From what I understand, cache warming isn't an option: it doesn't really work with Section Access, because it would need to effectively replicate App openings by the 1,000 different entities, which isn't viable.
At this stage, the only viable option appears to be to split the App into multiple Apps, where users are divided into groups and given access to the App containing the subset of data relevant to them. This is obviously messy and increases maintenance.
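If the split-App route is taken, one way to keep the maintenance overhead down is a single template script driven by a per-App variable, so each group App loads only its slice of the data. A minimal sketch, assuming a hypothetical `vGroup` variable and an entity-to-group mapping file (none of these names are from the thread):

```qlik
// Set differently in each group App, e.g. 'Transactions - Group 07':
SET vGroup = 'G07';

// Map each entity to the group App that serves it.
GroupMap:
MAPPING LOAD EntityId, GroupId
FROM [lib://Security/entity_groups.qvd] (qvd);

// Preceding load keeps only this group's entities; Section Access
// then reduces within the group App as before.
Transactions:
LOAD *
WHERE ApplyMap('GroupMap', EntityId, 'UNMAPPED') = '$(vGroup)';
LOAD *,
     Upper(EntityId) as REDUCTION
FROM [lib://Data/transactions.qvd] (qvd);
```

The trade-off is that each App's open time scales with its group's data volume rather than the full 300 million rows, at the cost of N reloads and N copies of the Section Access maintenance.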
Are there any other options? Please help.
And more generally speaking, how well does Qlik Sense handle large data volumes?
Testing has shown that Section Access users do NOT benefit from each other's caching. Tests were run in sequential order, with the following results:
User 1 - opens report - 5 mins
User 1 - opens report again - 10 seconds
User 2 - opens report - 5 mins
User 2 - opens report again - 10 seconds
In summary, user 2 did not benefit from the cache created by User 1.
User 1 and User 2 are restricted differently via section access.
It would be great if the load of 'all' data into memory - before Section Access kicks in - was cached and usable by all Section Access users, but unfortunately it doesn't work that way. I look forward to re-testing once I have the pre-load feature set up, to check if it caches differently.
If anyone has experience with the pre-load feature and how it caches for different section access users, I would love to hear your thoughts.
I recommend you use the QDE tool to determine more specifically what is going on during the open. It's a free tool and is easy to set up.
-Rob
Could you repeat this test with Section Access removed, to see whether cache sharing between users happens in general?
With no Section Access:
User 1 - opens app - 5 mins
User 1 - opens app again - 10 secs
User 2 - opens app - 10 secs
So yeah, the benefits of caching work fine for non section access users (as expected) but not for section access users.
Based on these findings, do you still think it's possible that the Preload App functionality (May 2024 version) will create a cache that all Section Access users benefit from?
Sharing applications between users via a common cache is a core feature of Qlik's in-memory technology. But I don't know whether this is also the intended scenario when Section Access is used, or whether it comes with restrictions and/or is configurable in some way.
From a purely technical point of view it should also be possible with Section Access, because Section Access is nearly the same as a UI selection: a selection flags parts of the data set as TRUE/FALSE, whereas "classical" Section Access drops the FALSE rows from memory. That data removal could, in principle, be replaced with a non-changeable LOCK on the reduction fields.
So far that sounds rather simple, but it becomes more complex if an application can be refreshed and/or API calls are possible, etc. These things are surely controllable with appropriate configuration, but then security depends on that configuration being correct, and that may be too weak a guarantee in view of the risks and costs.
Sonja_Bauernfeind: please involve the R&D team to clarify the intended behaviour and the related configurations.
Thanks Marcus. Looking forward to hearing more about the intended behaviour and if it's configurable. If it is, that would be brilliant and solve my problem.