<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Reusable job to capture Source record count and Target record count in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241904#M28912</link>
    <description>Hi Nikhil, thanks for your response on this. 
&lt;BR /&gt;The current design is as below: 
&lt;BR /&gt;tMSSqlInput--&amp;gt;tHDFSOutput--&amp;gt;tHiveLoad. 
&lt;BR /&gt;My intention is not to query the source or target again just to capture the counts. I am looking for a way to capture them on the fly from the existing tMSSqlInput and tHiveLoad components (something like tFlowMeter) inside a joblet. Either way, it looks like I need to modify all 150+ jobs or, as DGM suggested, redesign the 150+ jobs into a single dynamic job. Please advise if you have any other thoughts. 
&lt;BR /&gt;</description>
    <pubDate>Mon, 22 Jul 2019 16:33:54 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2019-07-22T16:33:54Z</dc:date>
    <item>
      <title>Reusable job to capture Source record count and Target record count</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241900#M28908</link>
      <description>Hi All, I am new to Talend, and I previously created many reusable mapplets in Informatica. We have around 150+ Talend jobs, created a couple of years ago, which read from a SQL Server source database and write to Hive. None of them capture the source read and target write counts.&lt;BR /&gt;Can you please advise whether we can create a reusable joblet to capture these counts and plug it into all 150+ Talend jobs?&lt;BR /&gt;Current design of the Talend jobs (version 7.1):&lt;BR /&gt;tMSSqlInput--&amp;gt;tHDFSOutput--&amp;gt;tHiveLoad&lt;BR /&gt;</description>
      <pubDate>Sat, 16 Nov 2024 05:12:23 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241900#M28908</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-11-16T05:12:23Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable job to capture Source record count and Target record count</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241901#M28909</link>
      <description>&lt;P&gt;If you are dealing with a huge volume of data, I suggest you separate extraction and loading into two different jobs.&lt;/P&gt;
&lt;P&gt;The commercial version offers a simple way to build a generic job using the “dynamic” data type.&lt;/P&gt;
&lt;P&gt;On TOS, I think we can find a more complex way to build something similar.&lt;/P&gt;
&lt;P&gt;You can use context variables to handle some of the configuration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This is what we did in a past case where we were importing data from 150 sources, with around 50 different schemas and a total of about 1 billion records per day.&lt;/P&gt;
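A rough sketch of the context-driven idea: each source supplies its table name and filter through context variables, and the generic job assembles its extraction query from them. The variable names and values below are illustrative assumptions, not details from this thread; in Talend they would be context fields loaded from a per-source context file or tContextLoad.

```java
// Illustrative only: shows how context values could drive a generic
// extraction query. In a real Talend job these Strings would be
// context.sourceTable and context.whereClause (hypothetical names).
public class GenericQuerySketch {
    public static void main(String[] args) {
        String sourceTable = "dbo.customers";            // hypothetical context value
        String whereClause = "load_date = '2019-07-22'"; // hypothetical context value
        String query = "SELECT * FROM " + sourceTable + " WHERE " + whereClause;
        System.out.println(query);
        // prints: SELECT * FROM dbo.customers WHERE load_date = '2019-07-22'
    }
}
```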
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jul 2019 14:39:27 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241901#M28909</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-07-22T14:39:27Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable job to capture Source record count and Target record count</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241902#M28910</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Since you are asking specifically about count details, I assume the schema for your joblet will remain the same across multiple tables. In that case, you can pass the other details as parameters to both the source and target components (table name, query WHERE clause, etc.).&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;A joblet is nothing but a part of a job that you move out as a separate entity, either because of its complexity or to make it a reusable part of your jobs. Please create your jobs, and if there are any errors, share the details of the job along with the job flow and other screenshots for further analysis.&lt;/P&gt; 
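To illustrate the count-capture part: Talend publishes each component's row count to globalMap under the key componentName + "_NB_LINE", which a tJava inside the joblet could read. This is a standalone sketch, with globalMap simulated by a plain HashMap; the component names match this thread's job design, but the counts are made up.

```java
// Standalone sketch of what a tJava step could do. In a real job,
// globalMap is supplied by Talend and the *_NB_LINE entries are set
// automatically after each component finishes.
public class CountSketch {
    public static void main(String[] args) {
        java.util.Map globalMap = new java.util.HashMap(); // raw type: stands in for Talend's globalMap
        globalMap.put("tMSSqlInput_1_NB_LINE", Integer.valueOf(1000));  // simulated source count
        globalMap.put("tHDFSOutput_1_NB_LINE", Integer.valueOf(1000));  // simulated target count

        int srcRows = ((Integer) globalMap.get("tMSSqlInput_1_NB_LINE")).intValue();
        int tgtRows = ((Integer) globalMap.get("tHDFSOutput_1_NB_LINE")).intValue();
        System.out.println("source=" + srcRows + " target=" + tgtRows);
        // prints: source=1000 target=1000
    }
}
```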
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Warm Regards,&lt;BR /&gt;Nikhil Thampi&lt;/P&gt; 
&lt;P&gt;Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jul 2019 14:46:33 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241902#M28910</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-07-22T14:46:33Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable job to capture Source record count and Target record count</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241903#M28911</link>
      <description>Thanks, DGM, for your suggestion. We were considering this single job with the dynamic option instead of 150+ independent jobs, but it may require a lot of regression testing after configuring all the source SQL as context variables, etc. So we are checking all options that avoid touching the existing jobs and simply add on-the-fly source/target count capture logic.</description>
      <pubDate>Mon, 22 Jul 2019 16:23:16 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241903#M28911</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-07-22T16:23:16Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable job to capture Source record count and Target record count</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241904#M28912</link>
      <description>Hi Nikhil, thanks for your response on this. 
&lt;BR /&gt;The current design is as below: 
&lt;BR /&gt;tMSSqlInput--&amp;gt;tHDFSOutput--&amp;gt;tHiveLoad. 
&lt;BR /&gt;My intention is not to query the source or target again just to capture the counts. I am looking for a way to capture them on the fly from the existing tMSSqlInput and tHiveLoad components (something like tFlowMeter) inside a joblet. Either way, it looks like I need to modify all 150+ jobs or, as DGM suggested, redesign the 150+ jobs into a single dynamic job. Please advise if you have any other thoughts. 
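One way to keep the per-job change small (my assumption, not something confirmed in this thread) is a shared Talend routine that every job calls once from a single tJava added at the end of the subjob, passing the NB_LINE values. The routine name and the audit format below are hypothetical; the persistence is only sketched as a printed line.

```java
// Hypothetical shared routine: each job would call logCounts(...) from a
// tJava at the end of its subjob, passing the *_NB_LINE values it reads
// from globalMap. A real routine might insert into an audit table instead
// of printing.
public class AuditRoutines {
    public static String logCounts(String jobName, int srcRows, int tgtRows) {
        String line = jobName + ": source=" + srcRows + ", target=" + tgtRows;
        System.out.println(line); // stand-in for the audit-table write
        return line;
    }

    public static void main(String[] args) {
        logCounts("load_customers", 1000, 1000);
        // prints: load_customers: source=1000, target=1000
    }
}
```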
&lt;BR /&gt;</description>
      <pubDate>Mon, 22 Jul 2019 16:33:54 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Reusable-job-to-capture-Source-record-count-and-Target-record/m-p/2241904#M28912</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-07-22T16:33:54Z</dc:date>
    </item>
  </channel>
</rss>

