<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Postgres data reading stream performance &amp; memory used in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315357#M86019</link>
    <description>Thanks. 
&lt;BR /&gt;I can find the cursor option. 
&lt;BR /&gt;We are using the Enterprise database Postgres (EDB) instead of the open-source one. What is the difference between the tPostgresql and tPostgresqlPlus components?</description>
    <pubDate>Mon, 20 Mar 2017 08:08:08 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2017-03-20T08:08:08Z</dc:date>
    <item>
      <title>Postgres data reading stream performance &amp; memory used</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315355#M86017</link>
      <description>Using the Big Data enterprise edition. Please help me with the job scenario below. 
&lt;BR /&gt;PostgresInput------------------------------------&amp;gt;File 
&lt;BR /&gt;All the ETL transformation is written as a SQL query and called inside the Postgres input component. The SQL output is written to a file in the same job. 
&lt;BR /&gt;The job is triggered from an ETL server; the Postgres DB is on a different server. 
&lt;BR /&gt;The following questions come to mind: 
&lt;BR /&gt;a) I ran the job with a data size of around 4 GB. The job failed whenever I allocated the JVM less than 4 GB, so Talend seems to keep all the data (4 GB) in memory (RAM). Does the pipeline not work in this case, or does Postgres pipeline streaming not work? 
&lt;BR /&gt;Once some bytes of data have been read, the job should write them to the file and release the buffer, right? 
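&lt;BR /&gt;Roughly, I expected the read to behave like the sketch below (plain JDBC with the standard PostgreSQL driver, not Talend's generated code; the CursorRead class, the fetchSizeFor helper, and all the numbers are mine, purely illustrative):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorRead {

    // Pick a cursor fetch size so one batch stays near a memory budget:
    // fewer rows per round trip for wide rows (BLOB, TEXT), more for narrow ones.
    static int fetchSizeFor(long avgRowBytes, long batchBudgetBytes) {
        long rows = batchBudgetBytes / Math.max(1L, avgRowBytes);
        return (int) Math.max(1L, Math.min(rows, 100_000L));
    }

    // The PostgreSQL JDBC driver only streams through a server-side cursor
    // when autocommit is off AND a fetch size is set; otherwise the whole
    // result set is materialized in the JVM heap.
    static void streamRows(Connection conn, String sql) throws SQLException {
        conn.setAutoCommit(false);            // required for cursor mode
        try (Statement st = conn.createStatement()) {
            st.setFetchSize(fetchSizeFor(200L, 16L * 1024 * 1024)); // ~16 MB batches
            try (ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // write the row to the file here; each batch buffer is
                    // released once its rows have been consumed
                }
            }
        }
    }

    public static void main(String[] args) {
        // narrow rows get a big batch, wide rows a small one
        System.out.println(fetchSizeFor(200L, 16L * 1024 * 1024));
        System.out.println(fetchSizeFor(1_000_000L, 16L * 1024 * 1024));
    }
}
```

That is just my mental model; if the Talend component does not set these, it would explain why all 4 GB end up in RAM. 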
&lt;BR /&gt;I'm pushing all the load to the DB, which is on a different server; the ETL server is used only for I/O. Why does the job occupy so much RAM on the ETL server? Am I missing anything? Please advise. 
&lt;BR /&gt;Thanks</description>
      <pubDate>Sat, 16 Nov 2024 09:58:55 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315355#M86017</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-11-16T09:58:55Z</dc:date>
    </item>
    <item>
      <title>Re: Postgres data reading stream performance &amp; memory used</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315356#M86018</link>
      <description>In the Advanced settings, tick the "Use Cursor" checkbox, 
&lt;BR /&gt;then adjust the number of rows to send. 
&lt;BR /&gt;Two "variables" affect the final performance: 
&lt;BR /&gt;
&lt;BR /&gt;number of rows 
&lt;BR /&gt;size of data 
&lt;BR /&gt;You can play with both: reduce the number of rows for bigger data (BLOB, TEXT columns) or increase it for smaller rows.</description>
      <pubDate>Sun, 19 Mar 2017 20:37:39 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315356#M86018</guid>
      <dc:creator>vapukov</dc:creator>
      <dc:date>2017-03-19T20:37:39Z</dc:date>
    </item>
    <item>
      <title>Re: Postgres data reading stream performance &amp; memory used</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315357#M86019</link>
      <description>Thanks. 
&lt;BR /&gt;I can find the cursor option. 
&lt;BR /&gt;We are using the Enterprise database Postgres (EDB) instead of the open-source one. What is the difference between the tPostgresql and tPostgresqlPlus components?</description>
      <pubDate>Mon, 20 Mar 2017 08:08:08 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315357#M86019</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2017-03-20T08:08:08Z</dc:date>
    </item>
    <item>
      <title>Re: Postgres data reading stream performance &amp; memory used</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315358#M86020</link>
      <description>I don't know; I use only the open-source version of Postgres, so I'm not familiar with EDB features.</description>
      <pubDate>Mon, 20 Mar 2017 14:51:15 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315358#M86020</guid>
      <dc:creator>vapukov</dc:creator>
      <dc:date>2017-03-20T14:51:15Z</dc:date>
    </item>
    <item>
      <title>Re: Postgres data reading stream performance &amp; memory used</title>
      <link>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315359#M86021</link>
      <description>Thanks. I have set the cursor size to 100000 and found a huge improvement in performance. 
&lt;BR /&gt;With 1 GB of JVM, the job completed successfully without any memory issues. 
&lt;BR /&gt;In one case, while reading 1 million records from Postgres, only 0.2 million were read, yet the job completed successfully. What could be the issue? 
&lt;BR /&gt;I thought it was because AUTOCOMMIT was set to TRUE, but it was already enabled in the code.</description>
      <pubDate>Tue, 21 Mar 2017 14:04:13 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/Postgres-data-reading-stream-performance-memory-used/m-p/2315359#M86021</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2017-03-21T14:04:13Z</dc:date>
    </item>
  </channel>
</rss>