brunodec
Contributor III

Delete mapping tables to free memory

Hello,

Mapping tables are automatically deleted at the end of script execution. However, is there a way to delete a mapping table during the script? That would be useful to free memory in the case of large mapping tables that are only used once in the script.

14 Replies
Peter_Cammaert
Partner - Champion III

Hi Bruno,

As others already pointed out, you cannot delete a mapping table by using a statement.

Added to that, mapping tables seem only capable of growing in size. However, they do not autoconcatenate like other tables do. Ok, you might think, let's redefine them with fewer values to reduce their memory footprint. But that won't work either, as a mapping table will act exactly the same as a field symbol table (if you're familiar with the concept): during each successive load of the same mapping table, index values (1st column) that weren't already present in the mapping table will be added, while the others will be left alone.
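For example, here is a small experiment you could run to see that behaviour (table, field and value names are made up; the comments describe the expected outcome per the explanation above, so verify it on your own setup):

// First load of the mapping table.
MyMap:
MAPPING LOAD * INLINE [
Key, Value
A, first
B, first
];

// Second load into the same mapping table name. Per the behaviour described
// above, only the new index value 'C' is added; 'A' and 'B' keep their
// original mapping.
MyMap:
MAPPING LOAD * INLINE [
Key, Value
B, second
C, second
];

// Check the result.
Test:
LOAD *,
    ApplyMap('MyMap', Key, 'missing') as Mapped
INLINE [
Key
A
B
C
];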

We do have some luck: a mapping table contains just one entry for every index value. So the field cardinality rule applies to mapping tables as well: the fewer distinct values you store in the index field, the smaller the mapping table will be.

In short:

  • If you don't really need a large mapping table, don't use it.
  • If possible, reduce the number of distinct values in the index field to reduce the memory footprint (see the sketch after this list).
  • If all optimisation tricks have been exhausted, resize your QlikView server platform to accommodate your documents (& reloads).
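
As an illustration of the second point, a minimal sketch (field and file names are made up) of reducing the cardinality of the index field, e.g. by mapping on the date part of a timestamp instead of the full timestamp:

// One map entry per distinct date instead of one per distinct timestamp,
// which is usually far fewer rows. If several rows share a date, ApplyMap()
// returns the first value loaded for that date.
DateStatusMap:
MAPPING LOAD
    Date(Floor(EventTimestamp)) as EventDate,
    Status
FROM events.qvd (qvd);

Facts:
LOAD *,
    ApplyMap('DateStatusMap', Date(Floor(EventTimestamp)), 'n/a') as DailyStatus
FROM facts.qvd (qvd);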

Have a great 2016!

Peter

PS Mapping tables can eat RAM like any other QV internal table. A mapping table with 10,000,000 distinct values will need as much as 1.0 GB of RAM, just to store the lookup table.

brunodec
Contributor III
Author

Thank you everybody for your feedback. I'm sure now it is not possible 🙂

Perhaps an idea for QlikTech to enable removing mapping tables in a future release.

shahamel
Creator

According to this thread on the Qlik Community (https://community.qlik.com/thread/199468), you can't
delete a mapping table from memory before the script has ended.

To make sure your mapping tables don't concatenate with each loop they pass through, you have to
follow these instructions:

1. Remove the word MAPPING from your mapping table load.
2. Store that table into a QVD under the same name.
3. Drop that table.
4. Recreate the original MAPPING table from the data you just stored.
5. Use an ApplyMap() statement that uses the values from that MAPPING table.

This method makes sure you use one MAPPING table with distinct values where no concatenation has
taken place. The example I provide below shows what I just explained.

With kind regards

S.H.A.Hamelink

1-3-2018 !

// Step 1. Build the lookup data with plain (non-MAPPING) Replace loads, so
// each pass through the loop overwrites the table instead of concatenating.
for each begin in 'Adriaan', 'B', 'Shoba'

begin:
Replace
// Mapping   <-- the MAPPING keyword has been removed here (instruction 1)
LOAD * INLINE [
ABC, NUM
A,$(begin)
B,Bassie
C,3
];

next

for each begin in 'Afas', 'B', 'Ofelia'

begin:
Replace
// Mapping   <-- the MAPPING keyword has been removed here (instruction 1)
LOAD * INLINE [
ABC, NUM
A,$(begin)
B,Zorro
C,D
];

next

// Step 2. Store that table into a QVD under the same name.
STORE begin INTO begin.qvd (qvd);

// Step 3. Drop that table.
DROP TABLE begin;

// Step 4. Recreate the original MAPPING table from the stored data.
begin:
MAPPING LOAD
    ABC,
    NUM
FROM begin.qvd (qvd);

// Step 5. Use ApplyMap() with the values from the MAPPING table above.
final:
NoConcatenate
LOAD
    ABC,
    ApplyValue,
    ApplyMap('begin', ABC, null()) as NewestApplyVersion;
LOAD * INLINE [
    ABC, ApplyValue
    A, 1
    B, 2
    C, 3
];

MT
Contributor

I've run into this as well. To free the memory, just reuse the mapping name and load it with a single record. If you watch the performance monitor while it's running, you'll see the memory released.
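
For example, a minimal sketch of that idea (table, field and file names are made up; whether the memory actually comes back is exactly what you would watch for in the performance monitor):

// Build and use the large map as usual.
BigMap:
MAPPING LOAD LookupKey, LookupValue
FROM big_lookup.qvd (qvd);

Facts:
LOAD *,
    ApplyMap('BigMap', LookupKey, null()) as LookedUpValue
FROM facts.qvd (qvd);

// When the map is no longer needed, reuse the same mapping name and load a
// single throw-away record, as suggested above.
BigMap:
MAPPING LOAD * INLINE [
LookupKey, LookupValue
_dummy, _dummy
];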

Good Luck!

syedalamdar
Contributor

There is a hack you can try. Stop the process at the point where you want to drop the mapping table and output a QVD. Create a new application, reload that QVD and continue the ETL process there. You can chain these apps in the task so that the first process completes before the second one starts.
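A rough sketch of that split across two apps (all table and file names are made up):

// --- App 1 (first task in the chain) ---
// Build and use the large mapping table here, then persist the result.
BigMap:
MAPPING LOAD LookupKey, LookupValue
FROM big_lookup.qvd (qvd);

Stage1:
LOAD *,
    ApplyMap('BigMap', LookupKey, null()) as LookedUpValue
FROM source.qvd (qvd);

STORE Stage1 INTO stage1.qvd (qvd);

// --- App 2 (second, chained task) ---
// The mapping table never exists in this app, so its memory is never
// allocated; just pick up the staged QVD and continue the ETL.
Stage2:
LOAD *
FROM stage1.qvd (qvd);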

Apart from this solution having multiple other issues (disk management and code management in general), I don't really recommend this approach, but this was a question I thought should have some solution.