I doubt you'll be able to do this on this machine. I've tried to load ~400 million rows with a distinct key, and that alone consumed about 8-10 GB of RAM, so 10 distinct char fields will be very tricky. As soon as you start getting rid of distinct values, several hundred million records won't be a problem any more, memory-wise: QV only stores each value once and then uses bit-stuffed pointers to reference it in each record. What do you need 100 million distinct values for, anyway?
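To see why the distinct count matters so much more than the row count, here's a rough back-of-the-envelope model of that symbol-table-plus-bit-stuffed-pointer scheme. This is a simplification for illustration, not QlikView's actual internals, and the function name and the 16-byte average value size are my own assumptions:

```python
import math

def estimate_column_bytes(row_count, distinct_count, avg_value_bytes):
    """Rough memory model of a QlikView-style column: each distinct
    value is stored once in a symbol table, and every row holds only
    a bit-stuffed index into that table. (Illustrative model only.)"""
    symbol_table = distinct_count * avg_value_bytes
    bits_per_pointer = max(1, math.ceil(math.log2(distinct_count)))
    pointers = row_count * bits_per_pointer / 8  # bits -> bytes
    return symbol_table + pointers

# A key with 400M distinct values over 400M rows: the symbol table
# dominates and memory use explodes.
high_card = estimate_column_bytes(400_000_000, 400_000_000, 16)

# The same 400M rows with only 10k distinct values: the symbol table
# is negligible and each pointer needs only ~14 bits.
low_card = estimate_column_bytes(400_000_000, 10_000, 16)

print(f"{high_card / 1e9:.1f} GB vs {low_card / 1e9:.2f} GB")
```

Under these assumptions the all-distinct key lands in the 8 GB ballpark mentioned above, while the low-cardinality version of the same column fits in well under 1 GB.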
I reckon your QV hang is caused by the system starting to swap to disk. Check the min and max memory values in QV.
I think this would be too much for your RAM. Besides considering whether you really need all of these columns, you should try to reduce the number of distinct values, perhaps by splitting fields or replacing field contents with some logic. It's more difficult with strings than with numbers, as shown here: The Importance Of Being Distinct. But with your resources I don't believe you have any option other than a workaround like the one mentioned above.
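The field-splitting trick works because cardinalities add instead of multiply once the field is split. A small Python sketch of the classic timestamp case (the 30-day range and 7-second step are made-up example values):

```python
from datetime import datetime, timedelta

# Hypothetical data: 30 days of timestamps at 7-second intervals.
# Stored as one combined field, every value is distinct.
start = datetime(2023, 1, 1)
timestamps = [start + timedelta(seconds=s) for s in range(0, 86_400 * 30, 7)]

combined_distinct = len(set(timestamps))          # one huge symbol table
date_distinct = len({t.date() for t in timestamps})  # at most 30 values
time_distinct = len({t.time() for t in timestamps})  # at most 86,400 values

print(combined_distinct, "->", date_distinct, "+", time_distinct)
```

Splitting one field with ~370k distinct values into a date field (30 values) and a time field (86,400 values) shrinks the combined symbol-table size by roughly a factor of four; in QV script the equivalent would be deriving separate date and time fields in the load instead of keeping the full timestamp.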
I'm not sure there is an actual limit on what a QVD can contain; it's more a limit on the PC that has to generate the table in RAM before it can be stored to disk.
Qlik support might be able to tell you more, or you could always create a massive Amazon cloud instance of QlikView to run this test.
Otherwise, maybe look at segmenting the large QVD into smaller (but still large) chunks of data, say 10 million rows each?
That could be useful when loading the data back in, as you could select which files to load instead of loading the single huge QVD and filtering it for the data you want.
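The segmenting idea can be sketched as follows. This is just an illustration of the pattern in Python with CSV files standing in for QVDs; the file names, the tiny chunk size, and the dummy table are all made up:

```python
import csv
import glob
import os
import tempfile

CHUNK_ROWS = 1000  # in practice something like 10 million

# Stand-in for the big table: 3,500 (key, value) rows.
rows = [(i, f"value_{i}") for i in range(3500)]

# Split the table into numbered segment files instead of one huge file.
outdir = tempfile.mkdtemp()
for offset in range(0, len(rows), CHUNK_ROWS):
    path = os.path.join(outdir, f"big_table_{offset // CHUNK_ROWS:03d}.csv")
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows[offset:offset + CHUNK_ROWS])

# Later, load only the segments that can contain the data you want --
# here the first two chunks -- instead of reading the whole table.
wanted = sorted(glob.glob(os.path.join(outdir, "big_table_*.csv")))[:2]
loaded = []
for path in wanted:
    with open(path, newline="") as f:
        loaded.extend(csv.reader(f))

print(len(loaded), "of", len(rows), "rows read")
```

The same pattern in QV script would be a loop writing one QVD per segment with STORE, then a selective LOAD over only the segment files you need; if the segments are split on a meaningful key (e.g. month), you can skip whole files at load time.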