<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Bulk Load - From Oracle to AWS RDS in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Bulk-Load-From-Oracle-to-AWS-RDS/m-p/2371494#M134435</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;For bulk inserts, please use Talend's bulk components if you manage the server, or check the AWS recommendations for bulk inserts.&lt;/P&gt;&lt;P&gt;Generally speaking, the following aspects can affect job performance:&lt;/P&gt;&lt;P&gt;1. The volume of data: reading a large data set degrades performance. In your case, 120,000,000 rows is a large data set.&lt;/P&gt;&lt;P&gt;2. The structure of the data: if tOracleInput has many columns, transferring the data will consume a lot of memory and time during job execution.&lt;/P&gt;&lt;P&gt;3. The database connection: a job always runs faster when the database is local; if the database is on another machine, even over a VPN, you may run into congestion and latency issues.&lt;/P&gt;&lt;P&gt;What is your rate of records per second with each of your options?&lt;/P&gt;&lt;P&gt;Best regards&lt;/P&gt;&lt;P&gt;Sabrina&lt;/P&gt;</description>
    <pubDate>Mon, 18 Oct 2021 09:32:56 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2021-10-18T09:32:56Z</dc:date>
    <item>
      <title>Bulk Load - From Oracle to AWS RDS</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Bulk-Load-From-Oracle-to-AWS-RDS/m-p/2371493#M134434</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;Good Day!&lt;/P&gt;&lt;P&gt;As part of a migration project, we are trying to load 120,000,000 rows of data from Oracle to AWS RDS.&lt;/P&gt;&lt;P&gt;We have tried the two approaches below:&lt;/P&gt;&lt;P&gt;Option 1: tOracleInput -----------------&amp;gt; tAmazonMysqlOutput&lt;/P&gt;&lt;P&gt;With Cursor (1000), only about 150 records were loading per second, so it is taking a lot of time to load the data.&lt;/P&gt;&lt;P&gt;Option 2: tOracleInput -----------------&amp;gt; tMysqlOutputBulk_1 -----&amp;gt; tMySQLBulkExec (this flow was also very slow).&lt;/P&gt;&lt;P&gt;Could you please suggest the best approach for the design? We want to load the data in as little time as possible.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;AChoudhry&lt;/P&gt;</description>
      <pubDate>Fri, 15 Nov 2024 23:37:17 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Bulk-Load-From-Oracle-to-AWS-RDS/m-p/2371493#M134434</guid>
      <dc:creator>AAdvikC</dc:creator>
      <dc:date>2024-11-15T23:37:17Z</dc:date>
    </item>
    <item>
      <title>Re: Bulk Load - From Oracle to AWS RDS</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Bulk-Load-From-Oracle-to-AWS-RDS/m-p/2371494#M134435</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;For bulk inserts, please use Talend's bulk components if you manage the server, or check the AWS recommendations for bulk inserts.&lt;/P&gt;&lt;P&gt;Generally speaking, the following aspects can affect job performance:&lt;/P&gt;&lt;P&gt;1. The volume of data: reading a large data set degrades performance. In your case, 120,000,000 rows is a large data set.&lt;/P&gt;&lt;P&gt;2. The structure of the data: if tOracleInput has many columns, transferring the data will consume a lot of memory and time during job execution.&lt;/P&gt;&lt;P&gt;3. The database connection: a job always runs faster when the database is local; if the database is on another machine, even over a VPN, you may run into congestion and latency issues.&lt;/P&gt;&lt;P&gt;What is your rate of records per second with each of your options?&lt;/P&gt;&lt;P&gt;Best regards&lt;/P&gt;&lt;P&gt;Sabrina&lt;/P&gt;</description>
      <pubDate>Mon, 18 Oct 2021 09:32:56 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Bulk-Load-From-Oracle-to-AWS-RDS/m-p/2371494#M134435</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2021-10-18T09:32:56Z</dc:date>
    </item>
  </channel>
</rss>