HungryOctopus
Contributor II

Loading data into MySQL DB stops without error message with large dataset

I run this job on several datasets, and for most of them it works fine when there are around 300,000 rows coming from MappedItemAssets.

[Screenshot: HungryOctopus_0-1708588026173.png]

But when there is more data (here 625,000 rows), the job eventually won't continue, even if I wait several hours. It just stops without any error message.

[Screenshot: HungryOctopus_2-1708588212127.png]

I already tried the following, but it didn't solve the problem:

- Dropping and writing the table instead of truncating it

- Changing the tDBOutput component to a JDBC component

- Sort on disk for the tSortRow, use of disk for the tUniqRow, store on disk for MapAsset

- As a check, I also replaced the tDBOutput with a tFileOutputDelimited; exactly 98,094 rows are written before the job stops.

I use Talend Studio 7.3; the database is MySQL 5.
Any help would be greatly appreciated! 

Thank you


3 Replies
Shicong_Hong
Support
Accepted Solution

Hello,

It looks strange that the job stops without any error. This job uses many memory-consuming components such as tHashOutput, tSortRow, and tUniqRow. The 'store on disk' option is a workaround for out-of-memory errors. Try allocating more memory to the job execution, and avoid using many memory-consuming components such as tHashInput/tHashOutput for large data sets.

Regards

Shicong

 

 

HungryOctopus
Contributor II
Author

Hi Shicong,

Thanks for your response.

The store-on-disk option didn't solve my problem, but allocating more memory to the job did (12 GB instead of 8 GB).

[Screenshot: HungryOctopus_0-1709036890677.png]
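For reference, this is roughly where the memory is set (a sketch, assuming Talend Studio's Run view > Advanced settings > 'Use specific JVM arguments'; the exact values depend on the RAM available on the machine running the job):

-Xms2048M -Xmx12288M (i.e. roughly a 12 GB heap limit)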

Many greetings,
HungryOctopus

jlolling
Creator III

If your output is a database table, you do not need to sort the records in the job. A database can do this much more efficiently using an appropriate index.
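As a rough sketch of what I mean (table and column names here are only placeholders, not taken from your job), create an index on the ordering column and let MySQL return the rows already sorted:

-- placeholder names, adjust to your schema
CREATE INDEX idx_item_id ON mapped_item_assets (item_id);

-- MySQL can use the index to return the rows in order, so no tSortRow is needed
SELECT * FROM mapped_item_assets ORDER BY item_id;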