
OutOfMemory Exception - Heap space + GC overload
Hello all,
I am currently working with large data and trying to produce an XML file as the final output.
The tests were successful with sample data; however, with the full data set I am encountering the exceptions below:
- java.lang.OutOfMemoryError: Java heap space
- java.lang.OutOfMemoryError: GC overhead limit exceeded
My job reads 3 CSV files:
- standard: 88,151 rows (main)
- personal: 5,900,000 rows (lookup)
- address: 230,000 rows (lookup)
1 standard row is linked to approximately 75 personal rows and 15 address rows.
First of all, I tried using a tHashOutput to keep the data in memory and see how it processes.
Secondly, I also tried generating the lookup files as delimited (CSV) files rather than keeping them in memory.
Please note that I cannot use temp directory storage for the lookups, since I am using a tXMLMap; that option is not available there.
From my investigation, I have also tried increasing the JVM arguments:
PC RAM: 8 GB
-Xmx4096M
-Xms2048M
As a result, only 9,500 of the 88,151 standard rows were processed before the job ended with the OutOfMemoryError mentioned above.
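For scale, a rough back-of-envelope estimate (assuming ~100 bytes of raw data per lookup row, which is a guess):

personal lookup: 5,900,000 rows × ~100 bytes ≈ 590 MB of raw data
Java object/String overhead (typically 3-5×)  ≈ 1.8-3 GB on the heap

So the personal lookup alone can plausibly approach the 4 GB heap before the join even starts.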
Can you advise or propose a solution? Thank you.
Accepted Solutions

In terms of CSV vs. memory, memory is quicker. But you are struggling with memory at the moment, so solve that problem first, then look at making it faster.

This isn't guaranteed to work, but it might help. You say that you cannot use temp directory storage because you are using a tXMLMap. Could you try the following (as sketched below):
1. Join your data in a tMap, where the temp directory storage option is available for the lookups.
2. Release the memory used by the tHash components by ticking "Clear cache after reading".
3. Filter the joined data set down to just the essential data, then output it to a new tHash.
4. In another subjob, build the XML with the tXMLMap, reading from the tHash.
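A rough sketch of that layout (component and file names are placeholders, not your exact job):

Subjob 1 (join and reduce):
  tFileInputDelimited (standard) --main----> tMap --out--> tHashOutput
  tFileInputDelimited (personal) --lookup--> tMap  [Store temp data: true]
  tFileInputDelimited (address)  --lookup--> tMap  [Store temp data: true]

Subjob 2 (build the XML):
  tHashInput ["Clear cache after reading" ticked] --main--> tXMLMap --out--> tFileOutputXML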
EDIT: One more thing I just remembered (maybe try this first, before the other changes): set the "Custom the flush buffer size" option on your XML output component to something like 1000 rows (and experiment). Otherwise the whole data set ends up in memory before it is written to the file.
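To illustrate what that flush buffer does, here is a hedged Java sketch (not Talend's generated code) of streaming XML output with a periodic flush instead of building the whole document in memory:

import java.io.FileOutputStream;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class FlushBufferSketch {
    public static void main(String[] args) throws Exception {
        try (FileOutputStream out = new FileOutputStream("output.xml")) {
            XMLStreamWriter w = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(out, "UTF-8");
            w.writeStartDocument("UTF-8", "1.0");
            w.writeStartElement("standards");
            for (int row = 1; row <= 88151; row++) {
                w.writeStartElement("standard");
                w.writeAttribute("id", String.valueOf(row));
                // ... nested personal/address elements would go here ...
                w.writeEndElement();
                // Flush every 1000 rows, like "Custom the flush buffer size" = 1000:
                // buffered output goes to disk instead of accumulating on the heap.
                if (row % 1000 == 0) {
                    w.flush();
                }
            }
            w.writeeEndElement();
            w.writeEndDocument();
            w.close();
        }
    }
}

With this pattern the heap only ever holds about one buffer's worth of output, which is the same idea the flush buffer setting applies inside the component.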

Hello,
Thank you for your reply.
Can you please confirm whether working directly with CSV files is faster than using memory storage (tBuffer, tHash)?
If so, I will try doing the lookups with CSV files, storing them in the temp directory just before the tXMLMap.
Hope it works.

I now have the following error while trying to convert the XML document to a string:
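For reference, the conversion step looks roughly like this (a simplified sketch using the standard javax.xml.transform approach, not my exact job code):

import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;

public class DomToStringSketch {
    // Serializes a DOM document to a String. Note: the whole serialized
    // document is held in memory, on top of the DOM itself.
    public static String domToString(Document doc) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
        StringWriter sw = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(sw));
        return sw.toString();
    }
}

I suspect the StringWriter is holding the entire serialized document in memory; pointing the StreamResult at a file instead would avoid that, if that is the cause.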
Can you advise or propose a solution, please?
