Do you really need the whole 22 GB QVD to be analysed at the same time? I'm not sure you do! I've built big applications with up to 500 million rows. Just because QlikView is in-memory doesn't mean you have to put everything in memory ;-)
Consider an incremental load, and split your QVD by period for instance; you will then have flexibility for analysis and better performance in the extract.
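A minimal sketch of the incremental-load-plus-period-split idea, assuming hypothetical table and field names (`Orders`, `OrderID`, `ModifiedDate`) and a `vPeriod` variable set by your reload task:

```
// Illustrative only: field/table names are assumptions, adapt to your model.
// 1. Pull only new/changed rows from the source since the last reload.
Orders:
SQL SELECT OrderID, CustomerID, Amount, ModifiedDate
FROM Orders
WHERE ModifiedDate >= '$(vLastReloadDate)';

// 2. Append the historical rows from the period QVD, skipping keys
//    already loaded above (Exists avoids duplicates on updated rows).
CONCATENATE (Orders)
LOAD * FROM Orders_$(vPeriod).qvd (qvd)
WHERE NOT Exists(OrderID);

// 3. Write the refreshed slice back to its period QVD.
STORE Orders INTO Orders_$(vPeriod).qvd (qvd);
```

Each period QVD stays small, so analysis apps can load only the periods they actually need.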
Think also about document chaining: not all users have the same needs, so you can optimize the way data is loaded/aggregated depending on the user typology.
Another option is Direct Discovery; see 11.2 SR5 for the latest improvements. Roughly, you mount an aggregated data set in the QV app and retrieve detail rows on the fly via direct SQL against your database.
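For the Direct Discovery part, the load script would look roughly like this (a sketch assuming a hypothetical `Orders` table; the `DIRECT QUERY` syntax is from QlikView 11.2):

```
// In-memory aggregate for the dashboard (loaded as usual)...
// ...and a Direct Discovery table for the detail rows:
DIRECT QUERY
    DIMENSION OrderID, CustomerID, OrderDate   // selectable, associated in the model
    MEASURE Amount                             // aggregated by the database on demand
    FROM Orders;
```

DIMENSION fields take part in the associative model; MEASURE values are fetched by SQL only when a chart needs them, so the 22 GB of detail never has to sit in RAM.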