Do you mean that you want to load redundant data because the forecast results of two weeks ago could differ from the current forecast results, or something similar? If yes, and if the amounts of data are really huge, you could reduce the data volume by loading only the offsets (deltas) instead of the full data and calculating the historical data from them.
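To make the offset idea concrete, here is a minimal sketch in Python with made-up forecast values (the field names and numbers are purely illustrative, not from your data): only the fields that changed are stored as offsets, and the historical snapshot is reconstructed on demand.

```python
# Hypothetical example: keep the latest forecast in full, but store
# historical snapshots only as offsets (deltas) against it.
current = {"A": 120, "B": 80, "C": 95}   # latest forecast values
offsets_2w = {"A": -15, "C": +5}         # only the fields that differed two weeks ago

# Reconstruct the two-weeks-ago snapshot: current value plus offset,
# defaulting to 0 for fields that did not change.
snapshot_2w = {k: v + offsets_2w.get(k, 0) for k, v in current.items()}
print(snapshot_2w)  # {'A': 105, 'B': 80, 'C': 100}
```

If most values stay the same between snapshots, the offset tables stay small, which is where the saving comes from - but reconstructing every historical state in the load script is exactly the extra development effort mentioned below.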
But I'm not sure that this would be easy to develop - it's rather a worst-case measure. Therefore I suggest you simply load all your data (of course only the really needed fields, and try to optimize high-cardinality fields: The Importance Of Being Distinct) and then see how big your application gets. Qlik stores data differently from text files or Excel files, so the file size won't grow linearly as in your calculation; it will probably be much smaller, because Qlik stores only distinct values and uses bit-stuffed pointers to connect the symbol tables with the data tables: Symbol Tables and Bit-Stuffed Pointers - a deeper look behind the scenes.
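The symbol-table principle can be sketched in a few lines of Python (the sample values are invented for illustration): each field keeps its distinct values only once, and each row holds just a small integer pointer into that list.

```python
import math

# A field with many repeated values - typical for low-cardinality data.
values = ["red", "blue", "red", "red", "green", "blue", "red", "red"]

symbols = sorted(set(values))                  # symbol table: distinct values only
pointers = [symbols.index(v) for v in values]  # data table: one pointer per row

# With 3 distinct values, each pointer needs only ceil(log2(3)) = 2 bits -
# the "bit-stuffed" part of the storage.
bits_per_pointer = math.ceil(math.log2(len(symbols)))
print(symbols)           # ['blue', 'green', 'red']
print(pointers)          # [2, 0, 2, 2, 1, 0, 2, 2]
print(bits_per_pointer)  # 2
```

This also shows why high-cardinality fields hurt: more distinct values mean a bigger symbol table and more bits per pointer, while repeated values are almost free.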
I wouldn't be surprised if it runs fast enough this way.