Anonymous
Not applicable

Oracle Load errors out after 10000 records

Hi
I have a simple load from a flat file into a remote Oracle database. The first 10k records fly into the database, but the load errors out afterwards:
Starting job LoadHUBAccounts at 15:12 23/03/2009. connecting to socket on port 3827
connected
Exception in component tOracleOutput_1
java.lang.ArrayIndexOutOfBoundsException: -32303
at oracle.jdbc.driver.OraclePreparedStatement.setupBindBuffers(OraclePreparedStatement.java:2673)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10689)
at custody_plus.loadhubaccounts_0_1.LoadHUBAccounts.tFileInputPositional_1Process(LoadHUBAccounts.java:3260)
at custody_plus.loadhubaccounts_0_1.LoadHUBAccounts.runJobInTOS(LoadHUBAccounts.java:3434)
at custody_plus.loadhubaccounts_0_1.LoadHUBAccounts.main(LoadHUBAccounts.java:3343)
disconnected
I'm assuming this is an Oracle driver issue, but I'm not sure.
Cheers
Al
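
For context, the pattern behind tOracleOutput is essentially a JDBC batch insert with a periodic commit. The following is a minimal sketch of that pattern, assuming a made-up HUB_ACCOUNTS table, placeholder connection details and a placeholder file reader; it is not Talend's actual generated code, but it shows why the job dies exactly at the commit boundary: the exception comes out of the driver's executeBatch() call.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.List;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        int commitEvery = 10000; // the "Commit every" value from the component's settings

        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO HUB_ACCOUNTS (ACCOUNT_ID, ACCOUNT_NAME) VALUES (?, ?)")) {
                int count = 0;
                for (String[] row : readFlatFile()) {   // stands in for tFileInputPositional
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();
                    if (++count % commitEvery == 0) {
                        ps.executeBatch();  // the ArrayIndexOutOfBoundsException above is thrown
                        con.commit();       // here, inside the driver's setupBindBuffers()
                    }
                }
                ps.executeBatch();          // flush whatever is left
                con.commit();
            }
        }
    }

    // Placeholder for the flat-file parsing; returns rows as String arrays.
    private static List<String[]> readFlatFile() {
        return Collections.emptyList();
    }
}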
16 Replies
Anonymous
Not applicable
Author

Check your Advanced settings tab - there is likely a setting similar to "Commit every XXXX rows".
If that's the case and it's set to 10,000, we're on to something.
Also, do those first 10,000 records actually get loaded to the database? Have you confirmed they're actually in the table?
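
To confirm from code whether that first batch is really committed, a row count against the target table is enough. A minimal sketch, assuming the same placeholder connection details and HUB_ACCOUNTS table name as the insert sketch above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowCountCheck {
    public static void main(String[] args) throws Exception {
        // Counts the rows currently in the target table, so you can see whether
        // the first commit (the first 10,000 rows) actually made it to Oracle.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM HUB_ACCOUNTS")) {
            rs.next();
            System.out.println("Rows currently in HUB_ACCOUNTS: " + rs.getLong(1));
        }
    }
}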
Anonymous
Not applicable
Author

I raised the number of records before commit to 100k and all my rows inserted with no problem. The file had 65k records, so it never reached the commit point.
Previously, with the commit point at 10k, the first 10k records inserted and then the job errored out as above.
I'll try to re-create the problem with the larger commit size of 100k.
_AnonymousUser
Specialist III

Hi,
I have the same problem in a job inserting into an Oracle table.
If I set the commit interval between 5,000 and 20,000, the job aborts after the second commit with this error:
Exception in component tOracleOutput_1
java.lang.ArrayIndexOutOfBoundsException
at oracle.jdbc.driver.OraclePreparedStatement.setupBindBuffers(OraclePreparedStatement.java:2673)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10689)
at dev_nsa.fichier_flabe_xml_0_1.Fichier_FLABE_XML.tFileInputDelimited_1Process(Fichier_FLABE_XML.java:11179)
at dev_nsa.fichier_flabe_xml_0_1.Fichier_FLABE_XML.runJobInTOS(Fichier_FLABE_XML.java:13402)
at dev_nsa.fichier_flabe_xml_0_1.Fichier_FLABE_XML.main(Fichier_FLABE_XML.java:13304)

If I set the commit interval to 2,000 or 50,000, it works (slowly, but it works).
I use a tOracleOutput with an Oracle SID connection and Oracle 10,
with Talend Open Studio 3.1.0.RC1_r23203.
Thank you,
Nicolas
_AnonymousUser
Specialist III

For information: the problem is solved in the latest version of TOS, 3.1.0.r24382.
Nicolas
Anonymous
Not applicable
Author

This error has re-appeared! I was running Jobs in Open Studio V4.2.2 and the Jobs were working correctly. However, my management asked me to investigate the MDM tool, so I downloaded MDM CE V4.2.3 r67267 and installed it. I migrated my Jobs into the new version and tried to run them, and they errored. On investigation, the error was exactly as described in this post.
The failure only happens with an UPDATE clause in a tOracleOutput. When set to 10,000 rows it errors as above; I raised it to 1,000,000 rows, the fault disappeared, and the Job completed successfully. This is a real pain, as to test the Jobs in the MDM environment I now have to rework all the affected Jobs to take account of this.
_AnonymousUser
Specialist III

I managed to circumvent this bug not by increasing the commit size but, on the contrary, by making it much smaller (10 rows). I have a process with over 500k rows, so this is quite slow, but it could be another solution to this bug, which continues to appear in Version 5.4.1, Build id: r111943-20131212-1133 (Community Edition).
I tried to increase the commit to 1M rows but hit a Java heap issue there, which is why I just made my commits smaller.
Anonymous
Not applicable
Author

This is a mystery to me. We output data to an Oracle database and usually work with a batch size of 10k and a commit size of 10k, and it works well.
It is not a bug in Talend; it is a bug in the Oracle JDBC driver. You should check whether you can replace or update the driver.
Here is an older discussion which shows that this error happens to other developers too:
https://community.oracle.com/message/2255186
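
If the driver is suspect, it helps to first confirm which ojdbc version the Job is actually loading. A minimal sketch (connection details are placeholders) that prints the driver name and version from a live connection:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DriverVersionCheck {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            DatabaseMetaData md = con.getMetaData();
            // Tells you which ojdbc jar the JVM really picked up, and what it is talking to.
            System.out.println("Driver name:      " + md.getDriverName());
            System.out.println("Driver version:   " + md.getDriverVersion());
            System.out.println("Database version: " + md.getDatabaseProductVersion());
        }
    }
}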
Anonymous
Not applicable
Author

This is also a problem in 5.5 running against SQL Server 2014. The solution is to open the output component's Advanced settings and disable "Use Batch Size". The commit size does not matter.
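
In plain JDBC terms, turning off "Use Batch Size" means each row is sent with its own executeUpdate() instead of being queued with addBatch(). As a rough illustration, the loop from the insert sketch earlier in this thread would become something like the following (same placeholder names; slower, one round trip per row, but it avoids the batch bind buffers where the exception is thrown):

int count = 0;
for (String[] row : readFlatFile()) {   // placeholder file reader, as before
    ps.setString(1, row[0]);
    ps.setString(2, row[1]);
    ps.executeUpdate();                 // one statement per row, no addBatch()/executeBatch()
    if (++count % commitEvery == 0) {
        con.commit();                   // the commit interval still applies
    }
}
con.commit();                           // commit the tail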
Anonymous
Not applicable
Author

Using Oracle 11g and Talend Open Studio Build id: V5.6.1_20141207_1530, I found that setting the commit to 1,000 as opposed to the standard 10,000 does the trick and imports all the data. Not sure why; I just tried it (based on a prior comment) and confirmed on several data streams that this is the case.