My understanding is that QV can run in RAM because the data model is stored in an ultra-normalised state: each distinct field value is only ever stored once, so each field effectively acts as a key into its own table (though this is hidden from the user in the table view).
With this in mind, I tend to treat the structure of the data model as a means to represent and redefine levels of granularity, rather than as a performance tool. I typically start with one Fact table and some dimensions linked off it, then concatenate onto this Fact if I find common dimensions across different transaction/Fact tables. Once these tables no longer share many common dimensions, or once new transaction tables arrive at a different level of granularity, I tend to rejig the structure and put a link table in.
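As a rough sketch of the concatenate-first step (all table and field names here are hypothetical, just to illustrate the pattern):

```
// Quotes and Orders share most dimensions, so force them into one Fact.
// A TransactionType flag keeps the two sources distinguishable in charts.
Fact:
LOAD
    QuoteID     as TransactionID,
    'Quote'     as TransactionType,
    CustomerID,
    ProductID,
    Quantity,
    Amount
FROM Quotes.qvd (qvd);

Concatenate (Fact)
LOAD
    OrderID     as TransactionID,
    'Order'     as TransactionType,
    CustomerID,
    ProductID,
    Quantity,
    Amount
FROM Orders.qvd (qvd);
```

Renaming the keys to a common field name (TransactionID here) is what lets the rows land in one table; the explicit Concatenate guards against QV auto-concatenating or refusing to, depending on field overlap.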
In a sales application, for example, quotes, invoices and orders that are all cast at the row level of granularity would get concatenated into one Fact. If, however, accounts receivable is added on, cast at the level of the invoice header, I would pull the row-level transactions back out into the three original tables and then use a link table to link through to the relevant invoice.
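A minimal sketch of the link-table rejig, again with hypothetical names: each fact carries only a composite %LinkKey, and the shared dimension keys live solely in the link table, which avoids QV building a synthetic key across the facts.

```
// Row-level invoice lines and header-level AR, joined via a link table.
InvoiceRows:
LOAD
    InvoiceID & '|' & CustomerID as %LinkKey,
    ProductID,
    Quantity,
    Amount
FROM InvoiceRows.qvd (qvd);

AccountsReceivable:
LOAD
    InvoiceID & '|' & CustomerID as %LinkKey,
    DueDate,
    OutstandingAmount
FROM AR.qvd (qvd);

// The link table holds the distinct keys from every fact, split back
// out into the real dimension fields.
LinkTable:
LOAD DISTINCT
    %LinkKey,
    SubField(%LinkKey, '|', 1) as InvoiceID,
    SubField(%LinkKey, '|', 2) as CustomerID
RESIDENT InvoiceRows;

Concatenate (LinkTable)
LOAD DISTINCT
    %LinkKey,
    SubField(%LinkKey, '|', 1) as InvoiceID,
    SubField(%LinkKey, '|', 2) as CustomerID
RESIDENT AccountsReceivable;
```

The same pattern extends to the quotes and orders tables: each contributes its distinct keys to LinkTable, and each fact keeps its own grain.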
Hopefully this makes sense. I'd love to hear if there are performance concerns, but I've not hit them in my biggest model yet (a stock application with 4 million rows). One thing to watch out for in the performance tests would be switching from one side of the data model to the other. It sounds like you're minimising this by not including data from different areas of the model in the same chart, but I suspect even asking QV to recalculate different charts that draw on different parts of the model might cause it to stutter a bit.