
Qlik Sense and Direct Discovery - problem with large database and keys

Hi,

I have a large database I'm trying to load into Qlik Sense with Direct Discovery, but I'm running into out-of-memory errors when loading it.

I don't have any measures in my table, so I'm loading everything as dimensions. Most of my fields only have a limited number of values (around the low hundreds). The exception is the key: I have a unique identifier for every row, and well over 100 million rows. I get out-of-memory errors when I load the key as a dimension.

Regardless of the amount of memory I have on my machine, I can see this being a problem for extremely large tables. I have another table which is much bigger; if I try to read in a unique key for every row of that, I'm going to run out of memory (translation: buying more memory or getting another machine isn't a solution).

Should I be reading in keys as dimensions? If not, how should I handle keys with Direct Discovery?

LIB CONNECT TO 'RDS database';

Trace Table name;

[Table name]:
DIRECT QUERY
DIMENSION
    key_id,
    small_key_id_other_table,
    small_key_id_other_other_table,
    text_value_field
FROM `table`.`table_name`;

Thanks,

Mike

4 Replies
Gysbert_Wassenaar

Yes, only the fields that are keys linking the data to the other tables in the Qlik Sense data model should be defined as dimensions. Otherwise you're simply loading all the data into Qlik Sense and might as well not be using Direct Query.
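For example, something like this (an untested sketch; it assumes key_id is a pure row identifier that doesn't associate to any other table, so it can go in the DETAIL clause instead of DIMENSION):

LIB CONNECT TO 'RDS database';

[Table name]:
DIRECT QUERY
DIMENSION
    small_key_id_other_table,
    small_key_id_other_other_table
DETAIL
    key_id,
    text_value_field
FROM `table`.`table_name`;

DIMENSION fields are loaded into memory so they can be used for associations and selections; DETAIL fields are never loaded, only fetched from the database when a table visualization needs them.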


talk is cheap, supply exceeds demand
Not applicable
Author

I've looked into this problem in more depth.

My answer is: don't use Qlik Sense in this way.

My key field is really large, there's no way around loading it, and I can't change the database structure. Because I want to run this on an AMI, I have to watch the cost; there's no way I'm paying for a more expensive instance than I have to.

My solution is to use SQL to pre-process the data and to do some of my BI work for me. I'll then use Qlik Sense on these smaller pre-processed data sets.
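Something along these lines (a rough sketch; the grouping columns and the row_count aggregate are hypothetical, just to show the shape of a pre-aggregated load):

LIB CONNECT TO 'RDS database';

// The database does the heavy lifting; Qlik Sense only sees the aggregated result.
[Rollup]:
LOAD *;
SQL SELECT small_key_id_other_table,
           text_value_field,
           COUNT(*) AS row_count
    FROM `table`.`table_name`
    GROUP BY small_key_id_other_table, text_value_field;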

It would have been nice to have used Qlik Sense in place of SQL, but that looks like it's not going to happen.

Not applicable
Author

Hi Mike,

Are you using Qlik Sense Desktop or Qlik Sense Server? If you are using Server, can you please let me know the spec of your server? I think I may encounter a similar issue in the near future and am wondering how much of a factor machine resources are here.

Kind Regards,

Footsie

Not applicable
Author

Hi Footsie,

I'm using Qlik Sense Desktop for my testing.

I wanted to understand the spec of the AMI I needed before renting it. This is similar to understanding the server spec you would need.

At least on the desktop side, part of the problem is the 'ungraceful' failure mechanism. I watched the memory consumption go up and up until it reached 100% of memory. After that point, Windows started to report out-of-memory problems and Qlik Sense Desktop crashed. My point? Qlik Sense didn't spot the problem and take action.

Looking at the pattern of memory consumption for the smaller tables I tried to read in, it looks like the whole data set is read in first and compression is done afterwards. If I'm right, then to read in the key(s) from a large table you would need enough memory to hold the entire key uncompressed. This substantially increases the memory needed.
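As a back-of-envelope illustration (both numbers are assumptions, not measurements: ~100 million rows and a 36-character text key, roughly UUID-sized), the raw key values alone come to about 3.4 GB before Qlik adds its symbol table and row pointers:

// Hypothetical sizing sketch; adjust the assumed numbers to your own table.
LET vRows        = 100000000;   // ~100 million rows, per my earlier post
LET vBytesPerKey = 36;          // assumed key width: 36-character text key
LET vEstimateGB  = $(vRows) * $(vBytesPerKey) / pow(1024, 3);
TRACE Raw key bytes before compression: $(vEstimateGB) GB;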

Maybe someone from Qlik could confirm whether this is correct?

Mike