<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>How to process SQL Server data (approx. 1.3 million records) in chunks in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/How-to-process-Sql-server-data-Approx-1-3-millions-records-in/m-p/2356672#M122140</link>
    <description>&lt;P&gt;How to process SQL Server data (approx. 1.3 million records) in Talend in chunks&lt;/P&gt;&lt;P&gt;&lt;B&gt;Hi,&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;We need to process SQL Server data (approx. 1.3 million records) in Talend. When we try to process all the records in one go, Talend fails with a "Heap space - out of memory" error. We have tried increasing the JVM memory size in every way we know, but it has not helped, probably because the workflow contains complex logic.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;So now we would like to process the data in chunks, but we are not sure how that can be done in Talend. Currently we pull the SQL Server data using the "SqlserverInput" component.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;We are not using tMap; it is a direct load with no transformations.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Could anyone please advise on this?&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Thanks in advance!&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Regards,&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Ram&lt;/B&gt;&lt;/P&gt;</description>
    <pubDate>Fri, 15 Nov 2024 21:29:38 GMT</pubDate>
    <dc:creator>Vthota1687286448</dc:creator>
    <dc:date>2024-11-15T21:29:38Z</dc:date>
    <item>
      <title>How to process SQL Server data (approx. 1.3 million records) in Talend in chunks</title>
      <link>https://community.qlik.com/t5/Talend-Studio/How-to-process-Sql-server-data-Approx-1-3-millions-records-in/m-p/2356672#M122140</link>
      <description>&lt;P&gt;How to process SQL Server data (approx. 1.3 million records) in Talend in chunks&lt;/P&gt;&lt;P&gt;&lt;B&gt;Hi,&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;We need to process SQL Server data (approx. 1.3 million records) in Talend. When we try to process all the records in one go, Talend fails with a "Heap space - out of memory" error. We have tried increasing the JVM memory size in every way we know, but it has not helped, probably because the workflow contains complex logic.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;So now we would like to process the data in chunks, but we are not sure how that can be done in Talend. Currently we pull the SQL Server data using the "SqlserverInput" component.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;We are not using tMap; it is a direct load with no transformations.&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Could anyone please advise on this?&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Thanks in advance!&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Regards,&lt;/B&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;Ram&lt;/B&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Nov 2024 21:29:38 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/How-to-process-Sql-server-data-Approx-1-3-millions-records-in/m-p/2356672#M122140</guid>
      <dc:creator>Vthota1687286448</dc:creator>
      <dc:date>2024-11-15T21:29:38Z</dc:date>
    </item>
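One approach not spelled out in the question is to page the extract with SQL Server's OFFSET/FETCH syntax (available since SQL Server 2012) and run one page per iteration, e.g. from a tLoop around the input component. As a rough sketch (Talend jobs compile to Java), with hypothetical table and column names dbo.MyTable / Id standing in for the real schema, the per-chunk queries could be generated like this:

```java
public class ChunkQueries {
    // Build one OFFSET/FETCH query per chunk (SQL Server 2012+ pagination).
    // dbo.MyTable and Id are placeholder names, not taken from the thread;
    // ORDER BY on a stable key is required for OFFSET/FETCH to page correctly.
    static String[] chunkQueries(long totalRows, long chunkSize) {
        // ceiling division: number of chunks needed to cover all rows
        int chunks = (int) ((totalRows + chunkSize - 1) / chunkSize);
        String[] queries = new String[chunks];
        for (int i = 0; i != chunks; i++) {
            queries[i] = String.format(
                "SELECT * FROM dbo.MyTable ORDER BY Id "
                    + "OFFSET %d ROWS FETCH NEXT %d ROWS ONLY",
                i * chunkSize, chunkSize);
        }
        return queries;
    }

    public static void main(String[] args) {
        // ~1.3 million rows in chunks of 100,000 gives 13 queries
        String[] q = chunkQueries(1_300_000, 100_000);
        System.out.println(q.length);
        System.out.println(q[0]);
    }
}
```

In a Talend job the loop counter would come from the tLoop globalMap variable and be substituted into the input component's query, so only one chunk of rows is in flight at a time.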
    <item>
      <title>Re: How to process Sql server data (Approx 1.3 millions records) in Talend in chunks</title>
      <link>https://community.qlik.com/t5/Talend-Studio/How-to-process-Sql-server-data-Approx-1-3-millions-records-in/m-p/2356673#M122141</link>
      <description>&lt;P&gt;Without seeing what your job is doing it’s hard to say, but giving the Talend job extra memory isn’t really the solution here. Components like tMap, tSortRow and tUniqRow have advanced settings that let them spill to temporary files on the server instead of holding everything in memory, which avoids this kind of error.&lt;/P&gt;&lt;P&gt;&lt;A href="https://help.talend.com/r/en-US/8.0/tmap/tmap?tocId=rNWfV5QzQlTBh66dwlMRMA" alt="https://help.talend.com/r/en-US/8.0/tmap/tmap?tocId=rNWfV5QzQlTBh66dwlMRMA" target="_blank"&gt;https://help.talend.com/r/en-US/8.0/tmap/tmap?tocId=rNWfV5QzQlTBh66dwlMRMA&lt;/A&gt;&lt;/P&gt;&lt;P&gt;See, for example, the "Temp data directory path" option.&lt;/P&gt;</description>
      <pubDate>Mon, 02 Oct 2023 09:40:57 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/How-to-process-Sql-server-data-Approx-1-3-millions-records-in/m-p/2356673#M122141</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2023-10-02T09:40:57Z</dc:date>
    </item>
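Beyond the component settings the reply mentions, the chunked-processing pattern itself is simple: buffer a fixed number of rows, process the batch, and release it before reading the next one, so peak memory is bounded by the batch size rather than the full 1.3 million rows. A minimal self-contained sketch of that pattern (simulated rows, no database, placeholder per-batch work):

```java
public class BatchedRun {
    // Simulate reading totalRows records and handling them in fixed-size
    // batches, so only one batch is ever held in memory at a time.
    static int processInBatches(int totalRows, int batchSize) {
        int[] buffer = new int[batchSize];
        int filled = 0;
        int batches = 0;
        for (int row = 0; row != totalRows; row++) {
            buffer[filled++] = row;          // stand-in for a fetched record
            if (filled == batchSize || row == totalRows - 1) {
                // real per-batch work would go here (e.g. a bulk insert)
                filled = 0;                  // release the batch
                batches++;
            }
        }
        return batches;
    }

    public static void main(String[] args) {
        // 1.3M rows in batches of 100k -> prints 13
        System.out.println(processInBatches(1_300_000, 100_000));
    }
}
```

In Talend terms, the same effect comes from bounding how many rows each pass handles (paginated queries, or cursor/fetch-size settings on the database input) rather than from raising the JVM heap.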
  </channel>
</rss>

