<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Parallelize the subjob in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Parallelize-the-subjob/m-p/2222335#M15989</link>
    <description>&lt;P&gt;I need to create a job that ingests a list of tables by Sqooping data from a source RDBMS into Hadoop and then into Hive.&lt;/P&gt; 
&lt;P&gt;I put the list of tables in a file, then read and iterate over it to ingest each table.&lt;/P&gt; 
&lt;P&gt;Because I have 300+ tables to ingest, it would take too long to process them one at a time, so I need to parallelize the work.&lt;/P&gt; 
&lt;P&gt;My current idea is for the job to read the list of tables and split it into batches of 10 tables each; each batch is then passed to a subjob for processing.&lt;/P&gt; 
&lt;P&gt;I have already implemented this logic in Spark Scala. The problem is that we need to move it to a Talend job, because the operations team, who will monitor and maintain it, are only familiar with Talend, and I don't know how to implement this logic in Talend.&lt;/P&gt; 
&lt;P&gt;I would appreciate any help. Thanks.&lt;/P&gt;</description>
    <pubDate>Sat, 16 Nov 2024 07:34:20 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2024-11-16T07:34:20Z</dc:date>
    <item>
      <title>Parallelize the subjob</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Parallelize-the-subjob/m-p/2222335#M15989</link>
      <description>&lt;P&gt;I need to create a job that ingests a list of tables by Sqooping data from a source RDBMS into Hadoop and then into Hive.&lt;/P&gt; 
&lt;P&gt;I put the list of tables in a file, then read and iterate over it to ingest each table.&lt;/P&gt; 
&lt;P&gt;Because I have 300+ tables to ingest, it would take too long to process them one at a time, so I need to parallelize the work.&lt;/P&gt; 
&lt;P&gt;My current idea is for the job to read the list of tables and split it into batches of 10 tables each; each batch is then passed to a subjob for processing.&lt;/P&gt; 
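&lt;P&gt;For reference, the batching step described above can be sketched as follows. This is a hypothetical, minimal Python sketch, not the actual Spark Scala code: the file name, batch size, worker count, and the placeholder ingest step are all assumptions.&lt;/P&gt;

```python
# Hypothetical sketch of the batching logic described above.
# Assumes a plain-text file with one table name per line.
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 10  # assumed batch size, as in the post

def load_tables(path):
    # Read the table list, skipping blank lines.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def ingest_batch(batch):
    # Placeholder for the real Sqoop-to-Hadoop-to-Hive ingestion of one batch.
    for table in batch:
        print("ingesting", table)

def run(path):
    tables = load_tables(path)
    # Split the full list into batches of BATCH_SIZE tables each.
    batches = [tables[i:i + BATCH_SIZE]
               for i in range(0, len(tables), BATCH_SIZE)]
    # Process the batches in parallel, mirroring the parallel subjobs.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(ingest_batch, batches))
```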
&lt;P&gt;I have already implemented this logic in Spark Scala. The problem is that we need to move it to a Talend job, because the operations team, who will monitor and maintain it, are only familiar with Talend, and I don't know how to implement this logic in Talend.&lt;/P&gt; 
&lt;P&gt;I would appreciate any help. Thanks.&lt;/P&gt;</description>
      <pubDate>Sat, 16 Nov 2024 07:34:20 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Parallelize-the-subjob/m-p/2222335#M15989</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-11-16T07:34:20Z</dc:date>
    </item>
    <item>
      <title>Re: Parallelize the subjob</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Parallelize-the-subjob/m-p/2222336#M15990</link>
      <description>Hi mahadi-siregar
&lt;BR /&gt;You can try checking the 'Enable parallel execution' option in the basic settings panel of the Iterate link, and checking the 'Use an independent process to run subjob' option on tRunJob (which calls the child job and passes the current table name to it). 
&lt;BR /&gt;Let me know if this improves the performance.
&lt;BR /&gt;
&lt;BR /&gt;Regards
&lt;BR /&gt;Shong
&lt;BR /&gt;</description>
      <pubDate>Fri, 19 Oct 2018 03:11:03 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Parallelize-the-subjob/m-p/2222336#M15990</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2018-10-19T03:11:03Z</dc:date>
    </item>
  </channel>
</rss>

