We are facing a problem with a huge data load in a Redshift database.
There is a job which loads data from the Redshift stage1 table to the stage2 table with all the given transformations. All the lookups used in this job load fine, but the main flow is not able to fetch the huge volume of data (10 million rows). After some time the job fails with the error "Exception in thread "Thread-0" java.lang.OutOfMemoryError: Java heap space".
We have tried the following options to get the job to run:
1. Increasing the JVM parameters (a quick runtime check is sketched after this list):
-Xms3072M
-Xmx6144M
2. Enabling the disk storage option in the tMap.
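As a sanity check, you can confirm that the increased heap settings are actually picked up at runtime by printing the JVM's maximum heap, for example from a tJava component placed anywhere in the job (the tJava placement is just a suggestion; the code itself is plain Java):

// Prints the maximum heap granted to the running JVM (reflects -Xmx)
long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
System.out.println("Max heap available to this job: " + maxHeapMb + " MB");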
We are using Talend version 6.5.1. Please find the job attached.
Hello,
Could you please let us know if this online KB article helps?
https://community.talend.com/t5/Migration-Configuration-and/OutOfMemory-Exception/ta-p/21669
Best regards
Sabrina
We found the solution for this.
We used the "Cursor" option available in the tRedshiftInput component. Now we are able to process ~15 million rows.
@rsunkavaa: So we just need to check this option in the "Advanced settings" of the component? Is that the only change? I also need to load 3 to 5 million rows of historical data and I am facing a similar problem.