I work with a data source that sounds very similar to yours. We use a single extract job to loop over each database, then each company within it, then each table of interest, producing one QVD per Database.Company.Table combination.
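As a rough sketch of that extract loop (all database, company and table names here are illustrative, and the connection statement will depend on your source):

```qlikview
// Hypothetical extract loop: one QVD per Database.Company.Table.
// The value lists and connection string are placeholders for your environment.
FOR EACH vDatabase IN 'DB1', 'DB2'
    ODBC CONNECT TO '$(vDatabase)';    // assumed DSN named after the database

    FOR EACH vCompany IN 'CompanyA', 'CompanyB'
        FOR EACH vTable IN 'Orders', 'Invoices'
            [$(vTable)]:
            SQL SELECT * FROM $(vCompany).$(vTable);

            // File name encodes Database.Company.Table
            STORE [$(vTable)] INTO [$(vDatabase).$(vCompany).$(vTable).qvd] (qvd);
            DROP TABLE [$(vTable)];
        NEXT vTable
    NEXT vCompany
NEXT vDatabase
```

In practice you would likely drive the three loops from control tables rather than hard-coded lists, so adding a new company or table needs no script change.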
We can then do optimised QVD loads of that data into a model as needed, and do any modelling work there.
During the extract we use incremental loading: the first reload pulls all data, and subsequent reloads fetch only the latest rows per company, based on a timestamp field in the table.
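The incremental step for one table looks roughly like this (field names such as ModifiedDate and OrderID, and the variable holding the last reload time, are assumptions you would replace with your own):

```qlikview
// Hypothetical incremental pattern for a single Database.Company.Table QVD.
// vLastReload would normally be read from a variable stored at the end of
// the previous run, or derived from the max timestamp already in the QVD.
Orders:
SQL SELECT * FROM CompanyA.Orders
WHERE ModifiedDate >= '$(vLastReload)';   // new and changed rows only

// Append history from the existing QVD; keying on the primary key avoids
// duplicating rows that were re-fetched because they changed.
CONCATENATE (Orders)
LOAD * FROM [DB1.CompanyA.Orders.qvd] (qvd)
WHERE NOT Exists(OrderID);

STORE Orders INTO [DB1.CompanyA.Orders.qvd] (qvd);
DROP TABLE Orders;
```

A single `WHERE NOT Exists()` clause on one field keeps the QVD load optimised, which is what makes this pattern fast even on large history files.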
The help inside QlikView Developer has some examples of incremental loading, and I'm sure there are lots here on the community too.
Hope that helps!