If you want to display the data or calculate anything with it, you will need to load it. But that doesn't mean you must load all of the data every time: you can use incremental approaches, slice the data down to just the records and fields you really need, and/or consolidate it to the required granularity.
Also, 90 GB of raw data doesn't mean that a Qlik application will need 90 GB. Depending on the data structure, and especially the number of distinct field values, it can be much less: often only 10%-20% of the raw-data size.
You must load data into the app for it to process anything.
As Marcus said, you can do incremental loads so that you are not loading the full 90 GB each time. You would also want to load your raw data into a QVD file and then use that QVD as your data source; it is a very efficient file format for Qlik.
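As a rough sketch of the common insert-and-update incremental pattern (table, field, and library names here are hypothetical; adjust them to your own source):

```qlik
// Sketch of an incremental load, assuming a hypothetical Sales table
// with a unique OrderID key and a ModifiedDate audit column.

// 1. Pull only the rows changed since the last run from the database.
Sales:
SQL SELECT * FROM dbo.Sales
WHERE ModifiedDate >= '$(vLastExecTime)';

// 2. Append the historical rows from the QVD, skipping keys that
//    were just reloaded (so updated rows are not duplicated).
Concatenate (Sales)
LOAD * FROM [lib://Data/Sales.qvd] (qvd)
WHERE NOT Exists(OrderID);

// 3. Store the merged result back as the new full QVD.
STORE Sales INTO [lib://Data/Sales.qvd] (qvd);
```

Loading from the QVD with an unmodified field list like this stays in Qlik's fast "optimized" QVD read path, which is a large part of why the QVD-first approach scales.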
Do as much data manipulation as you can when creating the QVD so that you don't need to do it in the app. There are things that you can do to limit the amount of data and indexing required.
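For example, a QVD-generator script can drop unneeded fields and consolidate the data to the granularity the app actually needs before the app ever loads it (all names below are hypothetical):

```qlik
// Sketch: keep only the needed fields and pre-aggregate to daily
// granularity in the QVD generator, not in the app (hypothetical names).
DailySales:
LOAD
    OrderDate,
    CustomerID,
    Sum(Amount)    AS DailyAmount,
    Count(OrderID) AS OrderCount
FROM [lib://Data/SalesRaw.qvd] (qvd)
GROUP BY OrderDate, CustomerID;

STORE DailySales INTO [lib://Data/DailySales.qvd] (qvd);
```

If the app only ever reports at daily level, this kind of consolidation can shrink the loaded data by orders of magnitude.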
A clean data model is always important.
Qlik Sense can handle 90GB of data if you follow best practices.
Compressing the 90 GB and using incremental loads would give you an effective solution; but if you insist on not loading the data, there is still Direct Discovery: Accessing large data sets with Direct Discovery ‒ Qlik Sense
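A minimal Direct Discovery statement looks roughly like this (table and field names are hypothetical): the DIMENSION fields' distinct values are loaded into memory for selections, while the MEASURE fields stay in the source database and are aggregated there on demand.

```qlik
// Sketch: only the DIMENSION fields are held in memory; MEASURE
// fields remain in the database and are queried when charts need
// them (hypothetical table and field names).
DirectSales:
DIRECT QUERY
    DIMENSION OrderDate, CustomerID, Region
    MEASURE Amount, Quantity
    FROM dbo.SalesFacts;
```

The trade-off is that chart response time then depends on the database, and some Qlik features (e.g. section access on measure fields, certain chart functions) are restricted compared with in-memory data.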