Hi all,
I have a challenge when working with Qlik Sense regarding the amount of data loaded in memory.
Most of the time my users work with data from the current year and the previous year. That's all I load in the load script, even though the data hub has data for several previous years.
Every now and then, a user will want to compare recent data to data way in the past (e.g. 5 years ago). Right now, it is not possible, because that data is in the DB but not in memory. Loading everything that is in the DB into Qlik is not feasible for performance reasons.
My ideal solution would be the ability to run queries, or a piece of the script, on demand: if a user navigates to a date for which there's no data in memory, it gets brought in at that moment.
However, all I could find about this so far was related to on-demand app generation, which is not exactly what I'm looking for. My scenario isn't having an overview app and then letting users create detailed apps as they narrow down results. What I'm looking for is a way to bring in old data when it's requested.
What would be your approach for this?
Thanks much!
Juan
As your data source is a database, you could look into the direct discovery method.
See link to help ..
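Just to illustrate the idea (table and field names are invented, not from your model): with Direct Discovery the dimension fields are loaded into memory at reload time, while the measure fields stay in the database and are aggregated there on demand when a chart needs them. A minimal script sketch would look like:

```
// Direct Discovery sketch - names are placeholders.
// DIMENSION fields: loaded into memory (distinct values only).
// MEASURE fields: remain in the database, queried on demand.
DIRECT QUERY
    DIMENSION
        OrderDate,
        CustomerID,
        ProductID
    MEASURE
        Amount,
        Quantity
    FROM Sales;
```

Whether this performs acceptably depends heavily on how fast your database answers the aggregation queries.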
Before going in this direction I would check whether the data model couldn't be optimized further - often there is significant potential for it. A further method to reduce the size of the data model is to use a mixed granularity for the data: keep the current year / last n months on a transactional level, the time-frame before that on a daily level, and the older periods on a monthly level. For most business views this will be sufficient for the users.
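A rough sketch of what such a mixed-granularity load could look like - all table names, field names, date boundaries and the SQL dialect (here SQL Server style) are purely illustrative:

```
// 1) Recent period: full transactional detail
Facts:
LOAD OrderID, OrderTimestamp, CustomerID, Amount;
SQL SELECT OrderID, OrderTimestamp, CustomerID, Amount
FROM Sales
WHERE OrderTimestamp >= '2023-01-01';

// 2) Middle period: aggregated to one row per day and customer
Concatenate (Facts)
LOAD Null() as OrderID, OrderDate as OrderTimestamp, CustomerID, Amount;
SQL SELECT CAST(OrderTimestamp AS date) AS OrderDate,
       CustomerID, SUM(Amount) AS Amount
FROM Sales
WHERE OrderTimestamp >= '2020-01-01' AND OrderTimestamp < '2023-01-01'
GROUP BY CAST(OrderTimestamp AS date), CustomerID;

// 3) Oldest period: aggregated to one row per month and customer
Concatenate (Facts)
LOAD Null() as OrderID, MonthStart as OrderTimestamp, CustomerID, Amount;
SQL SELECT DATEFROMPARTS(YEAR(OrderTimestamp), MONTH(OrderTimestamp), 1) AS MonthStart,
       CustomerID, SUM(Amount) AS Amount
FROM Sales
WHERE OrderTimestamp < '2020-01-01'
GROUP BY YEAR(OrderTimestamp), MONTH(OrderTimestamp);
```

The point is that all three slices end up in one concatenated fact table, so the front end doesn't need to care which granularity a given row came from.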
- Marcus
Hi @marcus_sommer , that was actually my first thought, to aggregate old data. However, in the concrete case I'm dealing with, I'm going to need the whole granularity for the old data even if it's requested less often.
I'll have a look at what @Lisa_P suggested. Sounds like an approach that could suit my case.
Thank you both!
I'm not that familiar with the Direct Discovery feature, but AFAIK it's aimed at pulling rather small datasets into the data model - some aggregated results, or live data like the current exchange rate for currencies, or similar. Therefore I doubt it would be very suitable for pulling really large datasets - and it might not only be the Qlik side that struggles: the database and the network would need to perform quite fast, too (would they, really?). Nevertheless, just give it a try.
Further alternatives might be some kind of document chaining - maybe with multiple applications - or triggering some (EDX) reloads ... But as already mentioned, before thinking of anything like that I would ensure that the used data model is really optimized (IMO it's easier to optimize an application than to bypass the issues with such "external" measures, and in principle it's the more sustainable approach).
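One more variant in the same direction could be a partial reload: statements without an Add or Replace prefix are skipped during a partial reload, so an extra slice of history could be appended on demand without reloading everything. Just a rough sketch - the paths and the variable are invented:

```
// Full reload: current + previous year only
Facts:
LOAD * FROM [lib://Data/Facts_Recent.qvd] (qvd);

// Runs only during a partial reload: appends the year the user asked for.
// vRequestedYear would have to be set before triggering the partial reload.
Add Only LOAD * FROM [lib://Data/Facts_$(vRequestedYear).qvd] (qvd);
```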
- Marcus
I tried the Direct Query but it gives me an error when trying the fields on a straight table: "There are too many rows to display this visualization".
Document chaining seems to be similar to On Demand Apps, which is what I'm going to try now. When reading about it, I don't like the idea of the users having to go from the selection app to the detail app to see the results for the data they selected. I think they might get confused. But still, I haven't been able to find an alternative so far.
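In case it helps anyone following this thread: as far as I understand the help, the ODAG template app receives the user's selections through binding variables in its load script. Roughly like this - the table and field names are just placeholders:

```
// Template app script: $(odn_Year) expands to the numeric year values
// selected in the selection app before this app is generated.
Facts:
SQL SELECT OrderID, OrderDate, CustomerID, Amount
FROM Sales
WHERE YEAR(OrderDate) IN ($(odn_Year));
```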