Hello all, I am in need of some major help.
I've been running queries and uploads on this system for 2 to 3 years, but lately the processes have been running slowly. I checked a log file, and this is the error it shows:
value used for ROWS parameter changed from 64 to 55
SQL*Loader-643: error executing INSERT statement for table TABLE_NAME
ORA-03113: end-of-file on communication channel
Process ID: 15816
Session ID: 341 Serial number: 13438
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
Specify SKIP=5830 when continuing the load.
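If the load is being driven by SQL*Loader from the command line, that last message means the next run can resume past the rows already committed. A rough sketch of what the resume command might look like — the credentials, connect string, and control file name below are placeholders, not taken from the actual job:

```shell
:: Resume the aborted load, skipping the 5830 records already committed.
:: loader_user/secret@XE and table_load.ctl are placeholders.
sqlldr userid=loader_user/secret@XE control=table_load.ctl log=table_load.log skip=5830
```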
Any input would be greatly appreciated. I have this process set to run through the Windows Task Scheduler every 30 minutes, every day, and it was going fine until now.
This particular job connects to an OpenEdge database, runs a query, and then uploads the results to an Oracle database hosted with Oracle APEX.
Thank you so much in advance.
Hello,
We need a little bit more information to address your issue.
Could you please clarify which Talend version/edition you are using?
As far as we know, you can navigate to Window > Preferences > Talend > Performance to increase the timeout value. Does that help?
Best regards
Sabrina
I will try this. Any idea as to why it was working for 2 to 3 years and now all of a sudden it is giving me a problem?
What should I set the connection timeout to? Right now the connection timeout is 15 seconds, the code format timeout is 30 seconds, and the HBase / MapR-DB scan limit is 50.
I am using Talend Open Studio for Big Data Version 7.2.1
Thank you for responding.
Hello,
Let's find out the problem step by step.
Have you experienced network issues at your site that could cause brief connectivity drops? We suspect the connection to your database server is being interrupted.
Does this issue also reproduce on another Talend build, for example Talend Open Studio for Big Data version 8.0?
Best regards
Sabrina
I am not 100% sure if it is dropping, but sometimes during the day it is faster than at other times. Is there any way I can test the connection between my computer and the server the database is hosted on? For the most part our internet connection is great (190 Mbps down and 190 Mbps up; we have fiber at the location this computer is at), but the server I am connecting to is hosted at https://apexhostingservices.com/, and while I have databases on those servers, I am not sure how to check the connection to them compared to my own internet.
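One quick way to spot-check the path between a Windows machine and the database host is with the built-in ping and tracert tools (the hostname below is a placeholder for the actual DB server; tnsping is only available if an Oracle client is installed locally):

```shell
:: Watch for drops and latency spikes over time (Ctrl+C to stop)
ping -t db.example.com

:: Show every network hop between this machine and the server
tracert db.example.com

:: If an Oracle client is installed, test the listener itself
tnsping XE
```

Occasional timeouts or latency spikes in the continuous ping would line up with intermittent ORA-03113 drops.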
Also, when I run the jobs I have a lot of files in the lib folder; would that cause a slowdown?
I am running Java 8 update 333 (build 1.8.0_333-b02).
For this connection I use openedge.jar with the driver class
"com.ddtek.jdbc.openedgebase.BaseDriver"
Hello,
Is it an Oracle database on your side?
ORA-03113 is one of Oracle's catch-all errors reported on the client side. Could you please check the alert log and trace files of your database to see if your archived log destination is full? If so, you need to delete the expired archive logs.
Best regards
Sabrina
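For reference, a sketch of how the alert log location and archived-log space are usually checked from SQL*Plus, assuming DBA (SYS) access — the connect string below is a placeholder:

```shell
:: Connect as a DBA (connect string is a placeholder)
sqlplus sys@XE as sysdba

SQL> -- Where the alert log and trace files live
SQL> SELECT value FROM v$diag_info WHERE name = 'Diag Trace';

SQL> -- How full the fast recovery area (archived logs) is
SQL> SELECT file_type, percent_space_used FROM v$recovery_area_usage;
```

If expired archive logs do need clearing, that is typically done in RMAN with `CROSSCHECK ARCHIVELOG ALL;` followed by `DELETE EXPIRED ARCHIVELOG ALL;`.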
I'm not sure how to check the alert log and trace files of the DB... I usually log in through APEX. I can get to the INTERNAL workspace, but is there a tool I can use to access the Oracle database directly? I usually contact the support team when I have questions, but I am not sure what to ask them for this issue. I sent them the error I posted here, and this is what they said back to me:
Upon further investigation of the error you are getting, it can be caused when there have been load errors (i.e. bad data that causes errors during loading). If you are using SQLLDR, specify the parameter ERRORS=1000000 and then check again.
I'm not sure what that means. If you have any suggestions as to how to check the error logs, that would be great. Thank you 🙂
I messaged the support team at the Oracle hosting provider, and this is what they said:
As per our check:
- There is still 34 GB of storage available for the logs, so low space is not the issue (we have cleared the redo log cache anyway).
- Archive logs are automatically cleared by Oracle when usage nears 90%.
You can connect to Oracle with SYS credentials using the SID "XE" instead of the service name, and run the appropriate commands to check the redo log space, etc.
If you are using sqlldr, please provide the name of the log file you use when calling the loader, so that we can also check it for causes of the issue.
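To spell out what support is describing: sqlldr is Oracle's SQL*Loader command-line utility, and its ERRORS parameter raises how many bad rows it tolerates before aborting the load. A hedged sketch, with the credentials and file names as placeholders:

```shell
:: Tolerate up to 1,000,000 bad rows instead of aborting early;
:: the log= file named here is what support is asking to see.
sqlldr userid=loader_user/secret@XE control=table_load.ctl log=table_load.log errors=1000000
```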
I'm not sure what sqlldr is... but I told them I use Talend for this. I run about 20 processes per day: some every hour, some every 30 minutes over a 14-hour window. And it's hit or miss: when it runs well it's great, but sometimes it runs slowly and only processes 1/6 of the lines of data. On 4 of the jobs I truncate the table and then insert, because the data changes.
I just ran the bat file, and these are the results...
value used for ROWS parameter changed from 64 to 55
Table TABLENAME:
36855 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 255420 bytes(55 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 36855
Total logical records rejected: 0
Total logical records discarded: 0
Run began on Tue Oct 11 04:43:55 2022
Run ended on Tue Oct 11 04:45:16 2022
Elapsed time was: 00:01:20.81
CPU time was: 00:00:00.38
-----------------------------------------
It loaded all the data, but there were pauses while it was uploading. No one is on the services right now, so it should run without pauses, but it didn't go 100% smoothly, and that is what concerns me. Let me know if any of my last comments above help at all.