Not applicable

Out of Memory - 1.5GB Document

Hi,

I have a very strange bug with a document that I am rather confused by.

In its current state on production, the document has 110,500,000 rows in a star schema and is 1.5GB in size. There are no synthetic keys. It runs perfectly on the server without any issue (14GB RAM, 4 cores, Windows Server 2012 x64). With the full data set, it also runs perfectly on my local development machine in QV 11 SR8 (16GB RAM, 8 cores, Windows 8.1 x64).

To stop myself going crazy when QlikView pauses/halts on save while I am working on the document, I load a limited sample of 1 million rows. This works perfectly until I increase the sample size: at 2 million rows or more, a large straight table throws an out-of-memory exception (see attached image). After some troubleshooting, it appears the exception is thrown on a simple Sum() column.
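
The limit itself is nothing exotic; one way to apply it in the load script is with a First prefix, roughly along these lines (table and field names below are placeholders, not my actual script):

    // Development row limit, sketched with placeholder names:
    SET vMaxRows = 1000000;        // raising this to 2000000+ triggers the error

    Fact:
    First $(vMaxRows)
    LOAD
        KeyField,
        Amount
    FROM FactData.qvd (qvd);

    // The straight table column that throws the exception is just a plain sum:
    // Sum(Amount)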

In summary, the document works perfectly on 100+ million rows of data and perfectly on 1 million, but I am very confused as to why it fails on the row counts in between. I can replicate this on multiple machines using the QlikView Desktop client. Any help or ideas are much appreciated.

Kind regards,

-Chris

2 Replies
Peter_Cammaert
Partner - Champion III

Either your machines need more RAM, or your straight table expression tries to calculate with unconnected fields (resulting in a cartesian product).
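
Just to illustrate the second case (field names invented): an expression like the one below, where the two fields sit in tables that share no key, forces QlikView to evaluate it over the Cartesian product of both tables, and memory use explodes as the row count grows.

    // Hypothetical example - OrderQty and UnitPrice come from two unlinked tables:
    Sum(OrderQty * UnitPrice)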

What is the uncompressed file size of your document (save with Settings->Document Properties->General->Save Format->Compression = None)?

Best,

Peter

Not applicable
Author

Hi Peter,

> Either your machines need more RAM, or your straight table expression tries to calculate with unconnected fields (resulting in a cartesian product).

The machine has over 10GB of free RAM with everything on my desktop and the full application running. The uncompressed version is 1.5GB with 110 million rows, and 30MB with 2.5 million rows. I don't understand how, under the same circumstances but with 40x the data, it works perfectly without a memory error.