Since the data model in Qlik and the way Qlik stores its data are quite different from the way data is stored in your (probably) relational database, a precise call is hard to make.
Things that have a lot of impact are, for example, the uniqueness of the data: are you storing long unique names, or mainly flag fields (0 or 1)?
What you could do is load the data into a Qlik Sense app and try to figure out how much data that app takes in memory.
If you have a rough idea of that answer, estimating how much memory it will take for 50 concurrent users is a matter of multiplying it by (1 + (50 - 1) * 0.1). In other words: add 10% of the app's base memory footprint for every user past the first.
Note that 10% is the maximum per session; in reality this number is usually a bit lower.
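The rule of thumb above can be sketched as a small calculation. The 10% per-session factor and the 500 MB base footprint are just example assumptions; measure the real footprint of your own app first.

```python
def estimate_ram_mb(base_footprint_mb: float,
                    concurrent_users: int,
                    per_session_factor: float = 0.10) -> float:
    """Upper-bound RAM estimate: the first session costs the full app
    footprint, and each additional concurrent session is assumed to add
    at most 10% of that footprint on top."""
    if concurrent_users < 1:
        return 0.0
    return base_footprint_mb * (1 + (concurrent_users - 1) * per_session_factor)

# Example: a (hypothetical) 500 MB app with 50 concurrent users
print(estimate_ram_mb(500, 50))  # 500 * (1 + 49 * 0.10) = 2950.0
```

Remember that this is an upper bound before caching; the real per-session overhead depends heavily on how differently your users select and navigate.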
... and then comes the interesting part...
Now you know how much the app takes in memory, but this is not your answer yet. On top of this, you will need to reserve a chunk of memory for caching, as caching has a strong positive effect on the speed of your application.
Unless you get a very detailed answer here on the community, I strongly suggest you contact your local Qlik office and ask if you can sit down with one of our pre-sales consultants to do some proper estimating.
I wish you good luck with that one.
Your customer basically just told you:
"You are going to build this house, but before I show you the specs, you first have to tell me what it costs."
Put a blindfold on, randomly smash your hand on the number section of your keyboard, send him that number, and pray you are correct?
The size can differ a lot based on the uniqueness of the data.
I don't have enough experience to give you an "average" number. Either call your customer and tell him that you need more information or else you cannot help him, or contact one of our offices and see if they can give you a guesstimate. Be clear to your customer that it will be a very rough guess until you have more insight into his data.
I figured I should give you slightly more information to work with. Plus, I happen to have a demo app that I normally use just to demo the loading speed of QVDs vs CSVs.
For the record (aka "Disclaimer"):
These are semi-random numbers that do not necessarily have any correlation with the information you are looking for. They are only intended to convince you that the information currently supplied to you is too limited to give your customer a good estimate of the memory usage.
I have a data set that contains 75 million rows of data in 3 columns. I ran 3 tests, changing only the aimed number of unique values each time. These are the results:
Aimed # unique records    File size CSV    File size QVD    RAM estimate
1,000                     1074 MB          292 MB            294 MB
100,000                   1513 MB          514 MB            492 MB
10,000,000                1953 MB          952 MB           1194 MB
Note that the RAM estimate is just plain RAM used. In other words: no selections have been made, so there is no caching. The number will go up the moment you start making selections. If you estimate your RAM usage too low, there will be no room for caching and your end users may experience an application that works but is slower than it needs to be (which results in less customer acceptance).
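The trend in the table makes sense if you think of Qlik's storage as dictionary (symbol-table) encoding: each distinct value is stored once, and every row keeps only a compact index into that table. The sizes below come from a toy model I made up for illustration (the 12-byte average value size is an assumption), not from Qlik's actual internals, but they show the same pattern: more unique values means more memory.

```python
import math

def toy_column_size_bytes(rows: int, unique_values: int,
                          avg_value_bytes: int) -> int:
    """Toy model of dictionary encoding for one column:
    distinct values are stored once in a symbol table, and each row
    stores only a bit-packed index into that table."""
    symbol_table = unique_values * avg_value_bytes
    bits_per_index = max(1, math.ceil(math.log2(unique_values)))
    row_indexes = rows * bits_per_index // 8
    return symbol_table + row_indexes

rows = 75_000_000
for uniques in (1_000, 100_000, 10_000_000):
    mb = toy_column_size_bytes(rows, uniques, 12) / 1e6
    print(f"{uniques:>10,} unique values -> ~{mb:,.0f} MB per column")
```

With low cardinality the per-row index dominates and stays small (10 bits for 1,000 distinct values); with high cardinality both the symbol table and the wider indexes grow, which is why flag fields are so much cheaper than long unique strings.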
I suggest you contact your local Qlik office and ask them for assistance. They probably have experience in the industry your customer is in and can give you an estimate based on that experience.