Qlik usually compresses detail data better than summary data, because detail data is more repetitive. The compression ratio can vary a lot, but I usually find that the higher the volume, the greater the compression. The best approach is to load the data into a QVW and look at how big the QVW is on disk. If you do this you can skip the compression-ratio step of the calculation; otherwise you have to guess a compression ratio to estimate how big the QVW file will be on disk.
Once you have the QVW size, multiply it by 4 (a conservative multiplier) to get the base RAM footprint of the app. For each concurrent user, add another 10% of that footprint.
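As a quick sanity check, here is a minimal sketch of that rule of thumb. The x4 base multiplier and the 10%-per-user increment come straight from the estimate above; the 2 GB QVW size and the user count are just hypothetical figures:

```python
def estimate_ram_gb(qvw_size_gb, concurrent_users):
    """RAM estimate: 4x the on-disk QVW size for the base footprint,
    plus 10% of that footprint for each concurrent user."""
    base = qvw_size_gb * 4
    return base + base * 0.10 * concurrent_users

# e.g. a hypothetical 2 GB QVW with 20 concurrent users:
print(estimate_ram_gb(2, 20))  # 8 GB base + 16 GB for users = 24 GB
```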
If you are truly facing 200 concurrent users, I would say that is very high for any BI application regardless of the technology, and it is rare. In that case you would need multiple QlikView servers to host the app and spread the RAM (and CPU core) requirements over multiple machines. Each server would need enough RAM to host the full app, but the RAM demand from concurrency would be distributed, as the sketch below illustrates.
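A sketch of how the same estimate plays out across a cluster, assuming users are load-balanced evenly (the server count and QVW size here are purely illustrative):

```python
def ram_per_server_gb(qvw_size_gb, total_users, num_servers):
    """Each server holds the full base footprint of the app,
    but only its share of the per-user concurrency overhead."""
    base = qvw_size_gb * 4
    users_per_server = total_users / num_servers
    return base + base * 0.10 * users_per_server

# e.g. a hypothetical 2 GB QVW, 200 users spread over 4 servers:
print(ram_per_server_gb(2, 200, 4))  # 8 GB base + 40 GB for 50 users = 48 GB per server
```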
Direct Discovery, or loop & reduce combined with document chaining, may be a more efficient solution.