ioannagr
Creator III

How to handle huge tables in Qlik Sense

Hello all,

 

So I have 8 tables totalling about 0.5 TB (also causing RAM issues), all of which are needed for one app.

I'm confident many of you have come across this problem; how have you handled it?

What can I do?

 

Thank you in advance,

Ioanna

5 Replies
Jonathan_Dienst
Partner - Contributor II

I think you have 3 options:

  • Increase your RAM
  • Filter your data (only the last month rather than the full year, for example)
  • Aggregate the data during load so you have fewer data points

You could also go through the fields and remove any that are not needed; a quick sketch of filtering and field-trimming follows below.
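
As a rough load-script sketch of option 2 plus the field-trimming idea (the names Sales, OrderDate, CustomerID and Amount are placeholders, not from your app):

Sales:
LOAD
    OrderDate,
    CustomerID,
    Amount                 // list only the fields the app actually uses
FROM [lib://DataFiles/Sales.qvd] (qvd)
WHERE OrderDate >= MonthStart(AddMonths(Today(), -1));   // last month only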

ioannagr
Creator III
Author

Hi @Jonathan_Dienst , thanks for replying.

Can you explain bullet number 3 for me?

 

Do you think I should use Hadoop or Spark?

I have seen gigabyte-sized tables crashing; I don't know what I should do with terabytes 😕

 

Jonathan_Dienst
Partner - Contributor II

Aggregate using Sum(), Max(), Avg() etc. so that you have, for example, monthly data instead of daily.
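
A minimal sketch of such an aggregation in the load script, with placeholder field names (OrderDate, CustomerID, Amount):

MonthlySales:
LOAD
    MonthStart(OrderDate)  as SalesMonth,   // collapse days to months
    CustomerID,
    Sum(Amount)            as MonthlyAmount
FROM [lib://DataFiles/Sales.qvd] (qvd)
GROUP BY MonthStart(OrderDate), CustomerID;

Instead of one row per transaction per day you then keep one row per customer per month, which can shrink the table dramatically.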

Another bullet: make sure any joins have properly aligned key field(s) between the two tables, otherwise the join can multiply records and inflate the table.

marcus_sommer

That is not much information to go on; just looking at the number of records or the size of the raw data is often not very meaningful for deciding how to handle and process them within Qlik.

Most important is the data model, which should be developed in the direction of a star schema. Of course, only the records and fields that are really needed should be loaded, without unnecessary formatting, and high-cardinality fields like timestamps should be split into dates and times (and similar). Only once you have done this, at least roughly, will you be able to estimate which resources are needed and whether it would be useful to add further measures like a mixed granularity, various flags, certain pre-calculations within the script and so on.
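
For example, a timestamp split could look like this in the load script (table and field names are placeholder assumptions):

Events:
LOAD
    EventID,
    Date(Floor(EventTimestamp))  as EventDate,   // date part only
    Time(Frac(EventTimestamp))   as EventTime    // time part only
FROM [lib://DataFiles/Events.qvd] (qvd);

A full timestamp can have millions of distinct values, while the date and time parts separately have far fewer, so Qlik's symbol tables get much smaller.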

Besides this, you will probably need a multi-tier data architecture to implement one or several layers of incremental loading.
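
A minimal sketch of an insert-only incremental load against a QVD, assuming hypothetical names (Orders, OrderID, ModifiedDate, the lib:// path and the SQL source) and skipping first-run handling and date formatting for brevity:

// Load only the records changed since the QVD was last written
LET vLastReload = TimeStamp(FileTime('lib://DataFiles/Orders.qvd'));

Orders:
LOAD OrderID, ModifiedDate, Amount;
SQL SELECT OrderID, ModifiedDate, Amount
FROM dbo.Orders
WHERE ModifiedDate >= '$(vLastReload)';

// Append the unchanged history from the existing QVD
Concatenate (Orders)
LOAD OrderID, ModifiedDate, Amount
FROM [lib://DataFiles/Orders.qvd] (qvd)
WHERE Not Exists(OrderID);

STORE Orders INTO [lib://DataFiles/Orders.qvd] (qvd);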

- Marcus

ioannagr
Creator III
Author

I see, thank you everyone. Very insightful information from all of you!