yulongguan
Contributor II

Heap size Memory Settings

Hi Talend support team,

 

Currently we are confused about the priority order of the following when a job runs in TMC. We have configured:

  • TRE Memory Settings configured in a TMC Run Profile
  • TRE Memory Settings configured within a TRE ....wrapper.conf file
  • Memory Settings configured in the Talend Studio instance that publishes the job
    • I.e. as configured in 'Window-->Preferences-->Talend-->Run/Debug'

We would like to understand the difference between the three approaches, and which one overrides the others.

3 Replies
quentin-vigne
Partner - Creator II

Hi @yulongguan 

 

So in reality you have two different things here:

1 - The wrapper.conf file.

Changing the Java memory in this file only sets the amount of memory the Remote Engine service itself can use. This doesn't affect job execution, only the service; it helps the service start and manage jobs.
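For reference, the service heap is typically set in wrapper.conf with Tanuki-style wrapper keys (values in MB). This is only a sketch, and the exact key names and defaults may differ between Remote Engine versions, so check your own wrapper.conf:

```properties
# Heap settings for the Remote Engine *service* JVM only -- not for jobs.
# Values are in MB; key names assume a Tanuki-style wrapper configuration.
wrapper.java.initmemory=256
wrapper.java.maxmemory=2048
```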

 

2 - The -Xmx Java setting.

By default, every job uses -Xms256M and -Xmx1024M. If you change these in the Studio, the job will use those values: for example, with -Xmx4G in the Studio, your job will be able to use up to 4G of memory.

If you publish that same job to TMC, the settings are published with it, so the job will still be able to use up to 4G of memory.

 

BUT if you set up a Run Profile with, for example, -Xmx8G and assign it to your job, it overrides what you set in the Studio, and the job will now be able to use up to 8G of RAM.
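Whichever layer wins, you can verify the effective limit at runtime. A minimal standalone sketch (not Talend-specific) that prints the JVM's maximum heap, which reflects whatever -Xmx was finally applied:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() returns the heap limit actually in effect for this JVM,
        // i.e. the -Xmx that won after Studio / Run Profile settings were applied
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Effective max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

Run it with `java -Xmx4G HeapCheck` and the printed value will be roughly 4096 MB (slightly less with some garbage collectors).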

 

My advice would be: leave the wrapper file as it is (you can't put more than 4G there anyway), and use whatever you want in the Studio for your local tests, unless you run your Remote Engine from the Studio.

For every job in TMC, assign a Run Profile. Typically we have 32 Run Profiles (one for each gigabyte between 1 and 32) and manage our jobs that way; this lets us easily see how many gigabytes are in use at any moment.

 

I hope this answers all your questions.

 

- Quentin

yulongguan
Contributor II
Author

Hi @quentin-vigne Thanks for the detailed explanation.

There is however something that remains unclear to me, and it relates to the memory assigned to child jobs executed via a tRunJob component in TMC. My assumption is:

  • When the 'Use an independent process to run subjob' setting in the tRunJob component is ON, the child job is given the memory configured in the Run Profile in TMC.
  • When the 'Use an independent process to run subjob' setting in the tRunJob component is OFF (as in all our Parent/Child jobs), the child job is given the memory settings configured for the child job in Studio.

Am I right to say so? Thanks once again.

quentin-vigne
Partner - Creator II

Let's say we have a job called "Parent". Inside it we have a "Child" job called by a tRunJob component.

In TMC we set up:

- Parent : 4G memory

- Child : 1G memory

 

Now for the first scenario : 

1 - If 'Use an independent process to run subjob' is ON: the "Child" job will start its own JVM with the parameters you gave it, meaning 1G of RAM.

2 - If 'Use an independent process to run subjob' is OFF: the "Child" job will use the current JVM, meaning the 4G available from the "Parent" job.
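Conceptually, the ON setting behaves like launching a brand-new JVM for the child with its own heap limit. A hedged sketch of that idea (this is an illustration, not Talend's actual mechanism; the class name and the hard-coded -Xmx256m are mine):

```java
public class IndependentChildSketch {
    public static void main(String[] args) throws Exception {
        if (args.length > 0 && args[0].equals("child")) {
            // The child JVM reports its own, smaller limit, set by its own -Xmx
            System.out.println("Child max heap: " + Runtime.getRuntime().maxMemory());
            return;
        }
        // Parent JVM heap, e.g. the 4G from the parent's Run Profile
        System.out.println("Parent max heap: " + Runtime.getRuntime().maxMemory());

        // 'Use an independent process' ON is conceptually this: a separate JVM
        // whose -Xmx comes from the child's own settings, not the parent's
        Process child = new ProcessBuilder(
                "java", "-Xmx256m",
                "-cp", System.getProperty("java.class.path"),
                "IndependentChildSketch", "child")
                .inheritIO()
                .start();
        child.waitFor();
    }
}
```

With OFF there is no second JVM at all, so the child simply inherits the parent's heap.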

 

Example (see the attached screenshot quentinvigne_0-1744193017640.png):

 

You can use these statements to display the current maximum memory available (the first inside the Studio, and the second if you want to display it in TMC with the log level set to INFO):

// Prints the effective heap limit to the console (visible in the Studio)
System.out.println("This is the memory from the Parent : " + Runtime.getRuntime().maxMemory());

// Logs the same value so it appears in TMC logs when log-level = info
log.info("This is the memory from the Parent : " + Runtime.getRuntime().maxMemory());

 

 

If this helped, don't forget to give a like / accept it as a solution 

 

- Quentin