Anonymous
Not applicable

I get an out of memory error when I select a CLOB

Dear All,
In my job, I create the content of a CLOB in a database.
After that I want to select this content and write it to a flat file.
But when the job tries to select this content with a length of 600,000,000, it gives me an out of memory error.
Even with a length of 200,000,000 it gives me the same error.
Does anyone know how to resolve this issue?
Thanks in advance for the help.
9 Replies
Anonymous
Not applicable
Author

Hi,
Please take a look at the KB articles TalendHelpCenter: Exception outOfMemory and TalendHelpCenter: Storing the lookup flow of tMap on the disk.
Best regards
Sabrina
Anonymous
Not applicable
Author

It's a typical Java error that occurs when too little memory is allocated for the so-called Java heap space. Try to allocate more memory: open the job's Run tab and go to "Advanced settings". There you can add further JVM parameters, e.g. "-Xms256M" (the job starts with a minimum heap of 256 MB) and "-Xmx1024M" (maximum heap size). Just try these parameters to make sure the job gets enough memory; note that a 64-bit JVM can address far more memory than a 32-bit one.
If the job still doesn't run, you may have to modify your job (for example, split it into an extraction job and an output job) or deploy your script on another server.
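For example, the JVM arguments in the Run tab's Advanced settings could look like the two lines below (the heap sizes are only illustrative; pick values your machine can actually provide):

    -Xms256M
    -Xmx1024M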
Anonymous
Not applicable
Author

Hi,
I already allocate 2048 MB to it; I can't give more.
I will try to store the lookup flow on disk.
Thanks,
Anonymous
Not applicable
Author

Hi Christ83,
Is there any update on your issue?
Best regards
Sabrina
Anonymous
Not applicable
Author

Hi Christ83,
please keep in mind that you are not reading just one row with a single 600-million-byte CLOB: the JDBC driver usually fetches more than one record at a time.
Please try to set the cursor size to 1. This will slow down your reads but avoids the memory overflow.
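For reference, this is roughly what a cursor (fetch) size of 1 means at the plain JDBC level. It is only a sketch; the connection string, table, and column names are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchOneRowAtATime {
        public static void main(String[] args) throws Exception {
            // Placeholder connection settings
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//host:1521/service", "user", "pass");
                 Statement st = con.createStatement()) {
                // Fetch a single row per round trip so the driver never
                // buffers several huge CLOB rows at once
                st.setFetchSize(1);
                try (ResultSet rs = st.executeQuery("SELECT my_clob FROM my_table")) {
                    while (rs.next()) {
                        // process rs.getClob(1) here, ideally as a stream
                    }
                }
            }
        }
    }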
Anonymous
Not applicable
Author

Dear all,
I tried to set the cursor size to 1, but it didn't solve my issue; I still get memory overflows.
I still have to try storing the lookup on disk.
Is there no other solution to avoid these memory overflows?
Thanks a lot,
Anonymous
Not applicable
Author

Dear all again,
I just finished my test with the lookup solution, but I got another error in that case:
Exception in component: java.util.ConcurrentModificationException
but I don't know where there is a concurrent modification.
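For context: java.util.ConcurrentModificationException is thrown when a collection is structurally modified while it is being iterated. A minimal Java reproduction of the general pattern (not your job's actual code):

    import java.util.ArrayList;
    import java.util.List;

    public class CmeDemo {
        public static void main(String[] args) {
            List<String> rows = new ArrayList<>();
            rows.add("a");
            rows.add("b");
            rows.add("c");
            for (String r : rows) {   // an Iterator is created here
                if (r.equals("a")) {
                    rows.remove(r);   // structural modification: the next call to
                }                     // next() throws ConcurrentModificationException
            }
        }
    }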
Best regards,
Christ83
Anonymous
Not applicable
Author

Hi Christ83,
I am thinking about a component to download BLOBs and CLOBs.
The internal procedure would have to be:
1. configure which columns are BLOBs or CLOBs
2. configure where to download the files
3. configure the names for the files (currently I am thinking about a chain of table + pk fields + column name)
4. start selecting, then download the LOBs and create the files with the given name pattern (see the sketch after this list)
Could this fit your needs?
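To make step 4 concrete, here is a minimal sketch of streaming a CLOB to a file in chunks instead of materializing it in memory. The table, primary key, column, and file name pattern are only placeholders:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.Reader;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ClobDownloader {
        static void dump(Connection con) throws Exception {
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, my_clob FROM my_table")) {
                while (rs.next()) {
                    Clob clob = rs.getClob("my_clob");
                    // name pattern: table + pk field + column name
                    String fileName = "my_table_" + rs.getLong("id") + "_my_clob.txt";
                    try (Reader in = clob.getCharacterStream();
                         BufferedWriter out = new BufferedWriter(new FileWriter(fileName))) {
                        char[] buffer = new char[8192];
                        int read;
                        while ((read = in.read(buffer)) != -1) {
                            out.write(buffer, 0, read); // the full CLOB is never held in memory
                        }
                    }
                    clob.free();
                }
            }
        }
    }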
Anonymous
Not applicable
Author

Hi jlolling,
That would be a really nice component; will you implement it?
Anyway, I found a solution to my problem. I don't know yet whether it will hold up for bigger files/CLOBs than the one I use, or whether it's just a cheat.
In the beginning, I wanted to do my steps in different blocks.
Those steps were:
- create the CLOB content (it was created in different blocks)
- insert the content into the CLOB field in the DB
- select the CLOB content and write it to a flat file
In the end, the real issue was writing it to a flat file; that is what gave me the out of memory error.
After too many tests, I decided to restart from the beginning.
My new schema is as shown in the attached picture.
Before that block I create a global variable which is a new StringBuffer().
This StringBuffer will contain the content (the future CLOB).
The iterate links and the two links on the top are just there to create the content (several contents if the jdbc_input on the left returns more than one result).
The interesting part is the third link:
the tRowGenerator sends the StringBuffer through the tMap directly into the DB (into the CLOB field), without any problem even with huge content.
It is also sent to a tNormalize, which splits the content into multiple lines instead of the single row that was created.
The job is then able to write the flat file.
I imagine this isn't the best way to do it, because in a way we build the content as one row and then split it again, but it works.
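As a rough Java equivalent of what the tNormalize step does here (the separator is an assumption; use whatever your content actually contains):

    public class NormalizeDemo {
        public static void main(String[] args) {
            // The job builds one big row in a StringBuffer; tNormalize then
            // splits it so the file writer sees many small rows instead of one huge one
            StringBuffer content = new StringBuffer("line1\nline2\nline3"); // the future CLOB
            for (String line : content.toString().split("\n")) {
                System.out.println(line); // each split value becomes its own output row
            }
        }
    }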

This solution allowed me to write the content as one row into the CLOB (I couldn't append to the field in the DB) and to write the content to an output flat file.

I will try with bigger content in the future and see if it still works.
Best regards,
Christ83
(job screenshots attached)