Anonymous
Not applicable

Performance issue with Redshift component

We are facing a problem loading a very large volume of data in a Redshift database.
There is a job that loads data from a Redshift stage1 table into a stage2 table, applying the given transformations. All of the lookups used in this job load fine, but the main flow is not able to fetch the large volume of data (~10 million rows). After some time the job fails with the error "Exception in thread "Thread-0" java.lang.OutOfMemoryError: Java heap space".
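For reference, this is roughly what we believe happens at the JDBC level (a minimal sketch, not the actual job code; the endpoint, credentials, and SELECT are placeholders, and it assumes a PostgreSQL-compatible driver, where a fetch size is only honoured when auto-commit is off):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RedshiftFetchSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical cluster endpoint and credentials -- replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:redshift://example-cluster:5439/dev", "user", "secret");
        // Without a bounded fetch size, the driver can materialize the whole
        // result set in the JVM heap before the first row is processed,
        // which is what blows up on ~10 million rows.
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(10_000); // stream 10k rows per round trip
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM stage1")) {
                while (rs.next()) {
                    // transform and write one row at a time;
                    // heap usage stays bounded by the batch size
                }
            }
        } finally {
            conn.close();
        }
    }
}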

We have tried the following options when executing the job:
1. Increasing the JVM parameters (see the heap check after this list):
-Xms3072M
-Xmx6144M
2. Enabling the disk storage option in the tMap.
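
One thing worth double-checking is that the larger heap actually reaches the JVM at run time (for instance, a job launched from an exported script needs the arguments applied there as well). A small check that could be dropped into a tJava component:

// Prints the heap ceiling the running job actually sees, so you can
// confirm the -Xmx6144M setting took effect.
long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
System.out.println("Max heap available to this job: " + maxHeapMb + " MB");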

 

We are using Talend version 6.5.1. Please find the job attached.

1 Solution

Accepted Solutions
Anonymous
Not applicable
Author

We found the solution for this.

We used the "cursor" option that is available in the tRedshiftInput component. We are now able to process ~15 million rows.
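
For anyone wondering what that option does: a server-side cursor lets Redshift hand the rows over in batches instead of shipping the whole result set at once. A rough JDBC-level sketch of that mechanism (this is not Talend's generated code; the endpoint, credentials, cursor name, and batch size are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RedshiftCursorSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:redshift://example-cluster:5439/dev", "user", "secret");
        conn.setAutoCommit(false); // cursors only exist inside a transaction
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("DECLARE stage_cur CURSOR FOR SELECT * FROM stage1");
            boolean gotRows = true;
            while (gotRows) {
                gotRows = false;
                // Pull the next batch from the cursor; only this batch
                // is ever held in the JVM heap at one time.
                try (ResultSet rs = stmt.executeQuery(
                        "FETCH FORWARD 10000 FROM stage_cur")) {
                    while (rs.next()) {
                        gotRows = true;
                        // process one row of the current batch
                    }
                }
            }
            stmt.execute("CLOSE stage_cur");
        }
        conn.commit();
        conn.close();
    }
}

The checkbox in the component presumably arranges something equivalent under the hood, which is why the heap no longer needs to hold all ~15 million rows at once.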


3 Replies
Anonymous
Not applicable
Author

Hello,

Could you please let us know if this online KB article helps?

https://community.talend.com/t5/Migration-Configuration-and/OutOfMemory-Exception/ta-p/21669

Best regards

Sabrina


sushantk19
Creator

@rsunkavaa: So do we just need to check this option in the "Advanced settings" of the job? Is that the only change? I am facing a similar problem, as I also need to load around 3 to 5 million rows of historical data.