Problem:
Question:
Seeking insights, best practices, or guidance on implementing this user-based data reduction approach. Your advice would be highly appreciated. Thank you!
It sounds like you may be in the market for ODAG.
Have you considered that approach?
Besides reducing the dataset to the authorized data per Section Access, this feature could also be used for performance purposes, for example providing only the current + last month/year to normal users and more periods to power users.
Another possibility, which often fits the user requirements, is to use a mixed granularity - maybe the current + last month at an atomic level, the last year at a daily level, and the data before that at a year-month level.
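A mixed-granularity model like that could be sketched in the load script roughly as follows (table, field, and connection names are illustrative placeholders, not from the actual app):

```
// Atomic detail only for the current + previous month
Facts:
LOAD TransactionID, CustomerID, Amount, Timestamp
FROM [lib://Data/transactions.qvd] (qvd)
WHERE Timestamp >= MonthStart(AddMonths(Today(), -1));

// Daily aggregate for the rest of the last year
Concatenate (Facts)
LOAD Null() as TransactionID, CustomerID, Sum(Amount) as Amount,
     DayStart(Timestamp) as Timestamp
FROM [lib://Data/transactions.qvd] (qvd)
WHERE Timestamp < MonthStart(AddMonths(Today(), -1))
  and Timestamp >= YearStart(AddYears(Today(), -1))
GROUP BY CustomerID, DayStart(Timestamp);

// Year-month aggregate for everything older
Concatenate (Facts)
LOAD Null() as TransactionID, CustomerID, Sum(Amount) as Amount,
     MonthStart(Timestamp) as Timestamp
FROM [lib://Data/transactions.qvd] (qvd)
WHERE Timestamp < YearStart(AddYears(Today(), -1))
GROUP BY CustomerID, MonthStart(Timestamp);
```

All three granularities end up in one concatenated fact table, so most existing charts keep working while the row count for older periods drops sharply.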
Further, keep an eye on the cardinality of the fields used: remove record IDs, split timestamps into separate date and time fields, and treat similar fields the same way. In the end the file size may become much smaller. It's also worth reviewing the UI approaches, avoiding aggr(), (nested) if-conditions, and inter-record functions.
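The record-ID and timestamp points could look like this in the script (field names are assumptions for illustration):

```
// The unique record ID is deliberately not loaded - it is usually
// the highest-cardinality field in the table and compresses worst.
LOAD
    CustomerID,
    Amount,
    // One field with millions of distinct timestamps becomes
    // two low-cardinality fields: a date and a time-of-day.
    Date(Floor(Timestamp)) as TransactionDate,
    Time(Frac(Timestamp))  as TransactionTime
FROM [lib://Data/transactions.qvd] (qvd);
```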
Absolutely, ODAG seems like the way to go for fixing this, but here's the thing: our current version only lets 10 people use the ODAG app at once for some reason. So before we give up on ODAG, we hope to try a different approach.
Picture this: on the front end, users pick a year, and then the backend only serves up data from that year.
On the other hand, we've got to keep all the data models from the current app. So basically, using the current app's setup, we just need to restructure the data every year. This should help speed up app opening times.
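One way to sketch that year filter in the load script, assuming a variable named vSelectedYear is supplied before the reload (that name and the mechanism that sets it are assumptions, not something from the current app):

```
// Assumed to be set externally before reload;
// fall back to the current year if it is empty.
LET vSelectedYear = If(Len('$(vSelectedYear)') > 0, '$(vSelectedYear)', Year(Today()));

Facts:
LOAD *
FROM [lib://Data/transactions.qvd] (qvd)
WHERE Year(Timestamp) = $(vSelectedYear);
```

Note that a WHERE clause on a QVD load forces a non-optimized load; loading a small table of the wanted year(s) first and filtering with WHERE Exists() instead keeps the QVD load optimized.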
I've come across ChatGPT which provides the following suggestion:
Further investigation is needed on how to connect this variable to the front end so that the selected year can be passed to this point in the script. Once that's achieved, the data could be filtered effectively based on the chosen year.
For very basic topics, tools like ChatGPT may give some ideas, but in more advanced scenarios they're not much help - it can easily be the opposite. I don't mean that ChatGPT couldn't return better results, but it's not the answers that matter, it's the questions asked - and if you're capable of asking the necessary questions, you don't need it anymore. It's an illusion that these tools can solve challenges - in the best case we can use them like the old-fashioned wizards to save some manual writing work ...
You could set up a copy of the app with the year preloaded (one copy per year). The maintenance on that could be annoying as any changes would need to be made to multiple apps, or you'd need to re-copy every time. Presumably, only the latest app would need to actually be reloaded (assuming data doesn't change retroactively). This is basically the same as using ODAG except you pre-generate the app, and as I said, each individual app would still likely be large (whereas with ODAG you could potentially further reduce it by filtering on more than just the year).
ChatGPT's "solution" doesn't seem helpful to me, poor performance aside.
In addition to all of the above, I'd definitely look into some basic performance tuning steps such as using QSDA Pro and analyzing the app for performance:
- What fields are using most of the memory? Can they be eliminated or modified in order to reduce cardinality?
- What data model is being used? Link Table may not be the best answer here, for example.
- Can key fields be reduced by using AutoNumber?
- Can timestamps and amounts be reduced by rounding to the necessary level of precision?
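The last two points could look roughly like this in the script (table and field names are illustrative assumptions):

```
LOAD
    // Composite key replaced by a compact sequential integer
    AutoNumber(OrderID & '|' & LineNo, 'OrderLine') as %OrderLineKey,
    // Timestamp rounded to the nearest minute - far fewer distinct values
    Timestamp(Round(OrderTimestamp, 1/1440)) as OrderTimestamp,
    // Amounts rounded to cents, if no greater precision is needed
    Round(Amount, 0.01) as Amount
FROM [lib://Data/orders.qvd] (qvd);
```

Each of these changes reduces the number of distinct values per field, which is what drives Qlik's in-memory footprint far more than the row count.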
There are many performance tuning techniques that can be applied to reduce the size of the app, before implementing ODAG.
I'm teaching Performance Tuning at the Masters Summit for Qlik. If you haven't been yet, I'd highly recommend it for managing an app like this. We're coming to Orlando and Dublin this fall - check if you can join us!