Qlik Community

QlikView Deployment

Discussion Board for collaboration related to QlikView Deployment.


Hardware Specification Understanding

Hi Forums/Friends,

At present I have the following:

SourceData = 50GB

Concurrent Users = 200

I want to work out the hardware specification for a QlikView environment installation.

While searching on Google, I found the following information:

RAM = (RAM_user × No. of users) + RAM_initial

Where:

RAM_initial = QVWsize_disk × FileSizeMultiplier; this is the initial RAM footprint for any application

RAM_user = RAM_initial × userRAMratio; this is the RAM each incremental user consumes

QVWsize_disk = SourceData × (1 − CompressionRatio); this is the size, on disk, of a QlikView file

Assumptions:

userRAMratio: range between 1%–10%

FileSizeMultiplier: range between 2–10

CompressionRatio: range between 20%–90%
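To make the calculation concrete, here is a quick Python sketch of the formula with mid-range values plugged in. The specific parameter choices (80% compression, multiplier of 4, 5% per-user ratio) are illustrative assumptions within the ranges above, not recommendations:

```python
# Sketch of the sizing formula above, using illustrative mid-range assumptions.
source_data_gb = 50          # SourceData
concurrent_users = 200       # No. of users

compression_ratio = 0.8      # assumed: within the 20%-90% range
file_size_multiplier = 4     # assumed: within the 2-10 range
user_ram_ratio = 0.05        # assumed: within the 1%-10% range

qvw_size_disk = source_data_gb * (1 - compression_ratio)   # QVW size on disk
ram_initial = qvw_size_disk * file_size_multiplier         # base RAM footprint
ram_user = ram_initial * user_ram_ratio                    # RAM per extra user
total_ram = ram_user * concurrent_users + ram_initial

print(f"QVW on disk : {qvw_size_disk:.0f} GB")   # 10 GB
print(f"Initial RAM : {ram_initial:.0f} GB")     # 40 GB
print(f"Per-user RAM: {ram_user:.0f} GB")        # 2 GB
print(f"Total RAM   : {total_ram:.0f} GB")       # 440 GB
```

Because the three parameters each span a wide range, the result is very sensitive to them; measuring the real QVW size on disk removes the biggest unknown.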

I am not able to fully understand this calculation.

In future the SourceData will be more than 50 GB and the concurrent users will be more than 200.

So, how do I calculate the hardware specification for the next 5 years, assuming the database and the user count will each grow by 10–15% every year?
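For the 5-year part, the growth is compound, so a sketch like the following can project the inputs forward and rerun the same sizing formula. The starting figures and the mid-range parameters are the same illustrative assumptions as above:

```python
# Project SourceData and users 5 years out at 10-15% annual compound growth,
# then rerun the sizing formula with assumed mid-range parameters.
def project(value, annual_growth, years):
    """Compound growth: value * (1 + g) ** years."""
    return value * (1 + annual_growth) ** years

for growth in (0.10, 0.15):
    data_gb = project(50, growth, 5)
    users = project(200, growth, 5)
    ram_initial = data_gb * (1 - 0.8) * 4          # assumed compression/multiplier
    total_ram = ram_initial * (1 + 0.05 * users)   # assumed 5% per-user ratio
    print(f"{growth:.0%} growth: {data_gb:.0f} GB data, "
          f"{users:.0f} users, ~{total_ram:.0f} GB RAM")
```

At 15% annual growth the data roughly doubles in 5 years (1.15^5 ≈ 2.01), which is why sizing for year 5 rather than year 1 matters.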

If anybody can explain the above calculation step by step, it would be a great help in arriving at an approximate hardware configuration requirement.

Thanks in advance

SD

2 Replies
Employee

Re: Hardware Specification Understanding

Qlik usually compresses detail data (which is more repetitive) better than summary data. The compression can vary a lot, but I usually find that the higher the volume, the greater the compression. The best thing to do is to load the data into a QVW and look at how big the QVW is on disk. If you do this you can skip the compression-ratio aspect of the calculations; otherwise you will have to guess the compression ratio to figure out how big the QVW file is on disk.

Once you have the QVW size, multiply it by 4 (conservative) and that is the RAM footprint of the app. For each concurrent user, add another 10%.
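That rule of thumb can be written as a one-liner; the 10 GB QVW in the usage example is just an illustrative figure:

```python
# Rule of thumb from the reply: app RAM = QVW size on disk * 4,
# plus 10% of that footprint for each concurrent user.
def ram_estimate_gb(qvw_size_gb, concurrent_users,
                    multiplier=4, per_user_ratio=0.10):
    base = qvw_size_gb * multiplier
    return base + base * per_user_ratio * concurrent_users

# e.g. a 10 GB QVW with 200 concurrent users:
print(ram_estimate_gb(10, 200))   # 40 + 40 * 0.1 * 200 = 840 GB
```

Note that with this rule the per-user term dominates quickly: at 200 users the concurrency RAM is twenty times the base footprint.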

If you are truly facing 200 concurrent users, I will say that is very high for any BI application regardless of the technology, and it's rare. In this case you would need multiple QV servers to host the app and spread the RAM (and core) requirements over multiple servers.

Each server would need enough RAM to host the app, but the RAM demands for concurrency would be distributed.
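A rough way to size that cluster, following the reply's point that every server carries the full app footprint while concurrency is spread: subtract the app footprint from each server's RAM, see how many users fit, and round up. The 256 GB per-server capacity here is an illustrative assumption:

```python
import math

# Each server needs the full app footprint; only the per-user RAM is spread.
app_ram_gb = 40          # QVW size * 4, as in the reply's example
per_user_gb = 4.0        # 10% of the app footprint per concurrent user
users = 200
server_cap_gb = 256      # assumed RAM per server (illustrative)

users_per_server = math.floor((server_cap_gb - app_ram_gb) / per_user_gb)
servers_needed = math.ceil(users / users_per_server)
print(f"{users_per_server} users/server -> {servers_needed} servers")
```

With these assumed numbers, each server holds 54 users and 4 servers cover the 200.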

Direct Discovery, and loop-and-reduce with document chaining, may be a more efficient solution.
