Recently I seem to have hit a data size limit on a large app. About 700 million records (500 million distinct) in a star schema doing simple counts and count distincts. The app behaves slowly and distinct counts on the dataset can take 10+ minutes.
Has anyone found a good way to ballpark max size?
Looking into Direct Discovery and dynamic app generation now, but still curious where the line should be drawn on app size, record count, etc.
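For reference, the slow calculations are chart expressions roughly like the following (field names here are placeholders, not the actual model):

    Count(OrderID)                 // simple count over ~700 million fact rows
    Count(DISTINCT CustomerID)     // distinct count, the one taking 10+ minutes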
Hi Tyler,
There are brief discussions of this here: How much data can QlikView handle? and here: Large amount of data (120 million rows) ... anyone got experience?
Please take a look. Thanks.
Siva
There is no hard limit on data volume. Regarding your 500 million distinct rows: QlikView stores each distinct value only once, so it effectively reduces duplicate values during reload (I am assuming those 500 million really are unique). I have worked with more than 10 data sources at a time, with data volumes of around 20-30 GB, and never faced a performance issue with QlikView. Of course, I don't know the upper bound of what it can handle.
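As a sketch of that point (table, field names, and the QVD path below are made up): because each distinct value is stored only once in the symbol tables, a long high-cardinality composite key can be replaced with a compact integer via Autonumber() to keep its footprint small:

    Facts:
    LOAD
        Autonumber(OrderID & '|' & LineNo) as %FactKey,   // one small integer per distinct key
        Quantity,
        Amount
    FROM fact_orders.qvd (qvd);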
Simple counts on a proper star-schema model should be quite fast; even in a rather small/slow environment the calculations should take seconds, not minutes. Therefore I assume that something is wrong within your app and/or your system doesn't have enough RAM. Please provide a screenshot from the table viewer and the calculation you use, and explain a bit more what you mean by distinct records: do you mean distinct field values? The number of distinct field values should be kept as low as possible; often you can remove such high-cardinality fields and/or split them into several fields, for example like this:
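(A common pattern, assuming a hypothetical OrderTimestamp field in your fact table: a timestamp with millions of distinct values is split into a date part and a time part, each with far fewer distinct values, and the original field is dropped.)

    Facts:
    LOAD
        OrderID,
        Date(Floor(OrderTimestamp)) as OrderDate,   // roughly one distinct value per day
        Time(Frac(OrderTimestamp))  as OrderTime    // time-of-day values repeat across days
    FROM fact_orders.qvd (qvd);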
- Marcus