That's not really much information - just looking at the number of records or the size of the raw data alone says little about how Qlik will handle and process them.
Most important is the data model, which should be developed in the direction of a star schema. Of course, only the records and fields that are really needed should be loaded, without unnecessary formatting, and high-cardinality fields like a timestamp should be split into separate date and time fields (and similar measures). Only once you have done this, at least roughly, will you be able to estimate which resources are needed and whether it would be useful to add further measures like a mixed granularity, various flags, certain pre-calculations within the script and so on.
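The timestamp splitting mentioned above might look like this in the load script (a minimal sketch - the table, field and file names are placeholders, not from your environment):

```
// Splitting a high-cardinality timestamp into a date and a time field.
// Floor() keeps the integer (date) part, Frac() the fractional (time) part,
// which reduces the number of distinct values Qlik has to store per field.
Facts:
LOAD
    ID,
    Amount,
    Date(Floor(MyTimestamp)) as Date,
    Time(Frac(MyTimestamp))  as Time
FROM [lib://Data/facts.qvd] (qvd);
```

Because each of the two resulting fields has far fewer distinct values than the combined timestamp, the symbol tables in the data model become much smaller.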
Besides this, you will probably need a multi-tier data architecture to implement one or several layers of incremental loading.
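A very simple insert-only incremental load within such a layer could be sketched like this (assuming a ModifiedDate field in the source and a previously stored QVD; all names are placeholders):

```
// Determine the last loaded timestamp from the existing QVD
MaxLoad:
LOAD Max(ModifiedDate) as MaxDate
FROM [lib://Data/facts.qvd] (qvd);

LET vMaxDate = Peek('MaxDate', 0, 'MaxLoad');
DROP TABLE MaxLoad;

// Load the historic data from the QVD (fast, optimized load) ...
Facts:
LOAD * FROM [lib://Data/facts.qvd] (qvd);

// ... and fetch only the new records from the database
Concatenate (Facts)
SQL SELECT * FROM facts WHERE ModifiedDate > '$(vMaxDate)';

// Store the combined result back for the next run
STORE Facts INTO [lib://Data/facts.qvd] (qvd);
```

Updates and deletions would need additional logic (e.g. a WHERE NOT Exists() load against a key field), which is why this usually ends up as its own layer in the architecture.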