QlikAppSize1 is larger and probably took longer to run. In that case, Qlik has to allocate memory to store 100,000,000 distinct ID values plus one text1 column with the single value 'text1'. QlikAppSize2 ran faster because Qlik only had to allocate memory for two text columns: ID with the single value 'RowNo()' and text1 with the single value 'text1'.
Qlik Sense optimizes memory allocation: if a value occurs more than once in your data (like 'RowNo()' or 'text1', which each occur 100,000,000 times), it keeps only one copy of each distinct value. So if we could inspect the memory allocation of the second application, we would find 'RowNo()' and 'text1' stored just once!
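A minimal sketch of the two test loads being compared (table names are my assumption; the column expressions follow the description above):

```qlik
// QlikAppSize1: RowNo() is unquoted, so it is evaluated per row,
// producing 100,000,000 distinct ID values.
BigTable:
LOAD
    RowNo() as ID,      // unique number on every row
    'text1' as text1    // quoted literal: one distinct value
AUTOGENERATE 100000000;

// QlikAppSize2: 'RowNo()' is quoted, so it is stored as a literal string.
// Both columns hold a single distinct value, repeated 100,000,000 times.
SmallTable:
LOAD
    'RowNo()' as ID,
    'text1'   as text1
AUTOGENERATE 100000000;
```

Because Qlik stores each distinct value only once, the second table's symbol tables hold just two strings, which explains the smaller app size and faster reload.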
Hope this helps,
Hi Arnaldo, thanks so much for your help!
Your answer helps clear up my question.
Previously, I thought that a number (float/double) might take more memory than text (a string).
When I tested again with the following load, it generated a Qlik app with an even bigger size.
My question is how to reduce/optimize the Qlik Sense app size if the table for Qlik Sense has 100 million rows.
LOAD
    Text(RowNo()) as ID,
    'text1' as text1
AUTOGENERATE 100000000;
Qlik already has a very efficient size-optimization algorithm that takes care of much wasted memory. If you are dealing with very large tables, you still have options, driven by the following line of thinking:
Does the Qlik application really need those 100 million records? We could be talking about data spanning several years, or device-monitoring data collected every 30 seconds or less. Your options are:
- Summarize the data to reduce it to a more manageable size.
- Split the data by region, state, country, or year, as long as doing so does not reduce the value of the data stored in a single QVD.
- Look carefully at the columns present in the table: are all of them required? Could they be split into two tables?
- If you are implementing a Qlik Sense server, that server needs lots of memory. With the Qlik Sense architecture, users access the data via browsers with minimal impact on their workstations' memory requirements, but if your business deals with such a volume of data, consider a server with plenty of memory.
- The AutoNumber idea is an option, but not when building QVDs; it is good on the application side of your solution, in the user interface. Qlik suggests using it on string columns that are used to join tables.
- Your Application Size 1 test proved that if your 100 million rows have a unique primary key, it will consume nearly 180 MB of disk space; I loaded that table on a 16 GB RAM workstation. We have a 300+ million-row table (more than 10 years of transactions, with lots of columns) that can be handled by a 32 GB RAM workstation; we are not building a QVD for that table yet, and we have the option to split it by region.
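To illustrate the summarization option above, here is a minimal sketch for the device-monitoring scenario (table, field, and QVD names are assumptions): detail readings are aggregated to one row per device per day before being stored, shrinking 30-second samples by several orders of magnitude.

```qlik
// Aggregate raw readings to one row per device per day,
// then store the summary and drop the detail table.
DailySummary:
LOAD
    DeviceID,
    Date(Floor(ReadingTimestamp)) as ReadingDate,
    Avg(ReadingValue)             as AvgReading,
    Count(ReadingValue)           as ReadingCount
RESIDENT DeviceReadings
GROUP BY DeviceID, Date(Floor(ReadingTimestamp));

STORE DailySummary INTO [lib://Data/DailySummary.qvd] (qvd);
DROP TABLE DeviceReadings;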
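The AutoNumber point can be sketched as follows (field, table, and QVD names are assumptions). Passing the same namespace ('OrderLink') on both sides guarantees that identical string keys map to the same compact integer, so the join still works while the long strings are no longer stored in the key field:

```qlik
Orders:
LOAD
    AutoNumber(OrderKey, 'OrderLink') as %OrderID,  // compact integer key
    OrderDate,
    Amount
FROM [lib://Data/Orders.qvd] (qvd);

OrderLines:
LOAD
    AutoNumber(OrderKey, 'OrderLink') as %OrderID,  // same namespace, same integers
    ProductID,
    Quantity
FROM [lib://Data/OrderLines.qvd] (qvd);
```

Note that applying AutoNumber in a LOAD from a QVD makes the load unoptimized, which is one reason to apply it on the application side rather than when building QVDs.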
Hope this helps,