If you want to distribute files for offline use, you should definitely use Loop and Reduce: the files will then contain only data that the user is allowed to see.
If you keep the file(s) on the server, the choice is less clear-cut. Section Access is usually the better option: you have only one file, and memory usage is lower when the same data should be seen by many users (overlapping data domains). Loop and Reduce is better, however, if you want to place the files for different security groups in different directory branches.
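Publisher's Loop and Reduce does this splitting at the document level, but the same idea can be sketched in script. The table name `Data`, the field `Region`, and the file paths below are assumptions for illustration only:

```
// Emulating Loop and Reduce in script: one reduced file per field value.
// Assumes an already-loaded table "Data" with a field "Region".
For Each vRegion in FieldValueList('Region')

    Reduced:
    NoConcatenate
    LOAD * Resident Data
    Where Region = '$(vRegion)';

    // Store the reduced slice in its own file; each file can then be
    // placed in the directory branch for its security group.
    STORE Reduced INTO [Data_$(vRegion).qvd] (qvd);
    DROP Table Reduced;

Next vRegion
```

Each resulting file contains only one group's data, so a user given access to a directory branch can never see another group's rows.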
I understand from the guys at your scalability department that having multiple small documents can often perform better than one large document. The idea is that most users will not be connected at the same time, so with a reasonably short document timeout (versus the 8-hour default), documents can load and unload from memory all the time. Since they are all relatively small, they load quite fast.
I understand that this works even better in a cluster environment - as is deployed for Swedbank.
What you say is absolutely right. A large document takes time to load and evaluates more slowly (calculations after a click). If you want to load the document when the user needs it and unload it soon after it has been used, then you should definitely chop the data up into smaller chunks - use Loop and Reduce.
But if you want the large document to stay loaded all the time, you should in many cases still go for the Section Access solution. The load time is not a problem (use pre-load), and the evaluation time is usually faster than the load time for a small document, which would be the alternative.
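For completeness, a minimal Section Access sketch for the single-document alternative. The user names, the reduction field `REGION`, and its values are made up; real deployments would load the access table from a protected source rather than inline:

```
Section Access;
LOAD * INLINE [
    ACCESS, NTNAME, REGION
    ADMIN, DOMAIN\ADMIN, *
    USER, DOMAIN\ALICE, EU
    USER, DOMAIN\BOB, US
];

Section Application;
// The application data must contain the reduction field, with values
// in upper case so they match the Section Access table.
Data:
LOAD *, Upper(Region) as REGION INLINE [
    Region, Sales
    EU, 100
    US, 200
];
```

Note that the data reduction only takes effect if "Initial Data Reduction Based on Section Access" is enabled in the document settings; with it, each user sees only the rows whose REGION matches their row in the access table, while the server keeps just the one document in memory.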