<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: How to load 500+ Million Records from SQL Server to Snowflake in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/How-to-load-500-Million-Records-from-SQL-Server-to-Snowflake/m-p/2323558#M93375</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would find some way of splitting the incoming data into manageable chunks and loading it chunk by chunk, using some kind of partitioning key (a date is a good example). This would also let you build some recoverability into the process if, as you've already found, the job hangs partway through.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;David&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you find these answers helpful, don't forget to Like and/or set this as the answer.&lt;/P&gt;</description>
    <pubDate>Sun, 07 Feb 2021 12:48:47 GMT</pubDate>
    <dc:creator>David_Beaty</dc:creator>
    <dc:date>2021-02-07T12:48:47Z</dc:date>
    <item>
      <title>How to load 500+ Million Records from SQL Server to Snowflake</title>
      <link>https://community.qlik.com/t5/Talend-Studio/How-to-load-500-Million-Records-from-SQL-Server-to-Snowflake/m-p/2323557#M93374</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;How can I load 500+ million records from SQL Server to Snowflake using Talend?&lt;/P&gt;&lt;P&gt;I am currently using the tSnowflakeOutputBulkExec component to stage the data locally, but the job gets stuck writing the files after fetching 14+ million records from SQL Server.&lt;/P&gt;&lt;P&gt;Please see the attached screenshots:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Talend Job Design, including the Basic settings of tDBOutputBulkExec&lt;/LI&gt;&lt;LI&gt;tDBOutputBulkExec Advanced settings&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Any help is greatly appreciated.&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Anil&lt;/P&gt;</description>
      <pubDate>Sat, 16 Nov 2024 00:41:01 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/How-to-load-500-Million-Records-from-SQL-Server-to-Snowflake/m-p/2323557#M93374</guid>
      <dc:creator>AKushnapalli</dc:creator>
      <dc:date>2024-11-16T00:41:01Z</dc:date>
    </item>
    <item>
      <title>Re: How to load 500+ Million Records from SQL Server to Snowflake</title>
      <link>https://community.qlik.com/t5/Talend-Studio/How-to-load-500-Million-Records-from-SQL-Server-to-Snowflake/m-p/2323558#M93375</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would find some way of splitting the incoming data into manageable chunks and loading it chunk by chunk, using some kind of partitioning key (a date is a good example). This would also let you build some recoverability into the process if, as you've already found, the job hangs partway through.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;David&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you find these answers helpful, don't forget to Like and/or set this as the answer.&lt;/P&gt;
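&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For illustration, here is a minimal JDBC sketch of that chunked, resumable pattern. The dbo.orders table, its columns, the monthly chunk size, and the connection details are assumptions for the example, not taken from this thread; in Talend Studio you would typically drive the same loop with context variables (for instance a tLoop feeding a tDBInput and a tSnowflakeOutputBulkExec).&lt;/P&gt;&lt;PRE&gt;
import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.LocalDate;

// Sketch only: extract a large SQL Server table one month at a time,
// writing one CSV file per chunk for a later Snowflake bulk load, and
// checkpointing progress so an interrupted run can resume.
public class ChunkedExtract {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection and table details -- substitute your own.
        String url = "jdbc:sqlserver://localhost;databaseName=sales;encrypt=false";
        Path checkpoint = Paths.get("last_loaded_date.txt");

        // Resume from the last completed chunk if a previous run was interrupted.
        LocalDate from = Files.exists(checkpoint)
                ? LocalDate.parse(Files.readString(checkpoint).trim())
                : LocalDate.of(2015, 1, 1);
        LocalDate end = LocalDate.of(2021, 1, 1);

        String sql = "SELECT id, amount, created_at FROM dbo.orders "
                + "WHERE created_at &gt;= ? AND created_at &amp;lt; ?";

        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            while (from.isBefore(end)) {
                LocalDate to = from.plusMonths(1);   // one month per chunk
                Path out = Paths.get("orders_" + from + ".csv");
                try (PreparedStatement ps = conn.prepareStatement(sql);
                     BufferedWriter w = Files.newBufferedWriter(out)) {
                    ps.setFetchSize(10_000);         // hint the driver to stream rows
                    ps.setDate(1, Date.valueOf(from));
                    ps.setDate(2, Date.valueOf(to));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            w.write(rs.getLong("id") + "," + rs.getBigDecimal("amount")
                                    + "," + rs.getTimestamp("created_at"));
                            w.newLine();
                        }
                    }
                }
                // Record progress only after the chunk file is complete, so a
                // restart re-extracts at most one month of data.
                Files.writeString(checkpoint, to.toString());
                from = to;
            }
        }
    }
}
&lt;/PRE&gt;&lt;P&gt;Monthly chunks are only a starting point; choose a range size that keeps each staged file comfortably within what your bulk-load component can PUT and COPY in one pass.&lt;/P&gt;</description>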
      <pubDate>Sun, 07 Feb 2021 12:48:47 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/How-to-load-500-Million-Records-from-SQL-Server-to-Snowflake/m-p/2323558#M93375</guid>
      <dc:creator>David_Beaty</dc:creator>
      <dc:date>2021-02-07T12:48:47Z</dc:date>
    </item>
  </channel>
</rss>

