Hello,
I have a Talend job running in a Docker container. It uses multithreading (Iterate) with a database query for each thread (pagination).
After some time (at random) it simply stops working. There is no interaction with the database and no errors in the Docker logs; the job does not finish, it just sits there doing nothing.
There is an UPDATE against the same table as the pagination query, but it is not supposed to have this effect.
Here is my job.
The DBConnection is a shared connection.
The TOTAL DBInput determines the row count to paginate over:
The nombreThreads tLoop and the Iterate connection enable the multithreading:
The RETABL_MASSIF_FLAT DBInput performs the paginated query:
and the updateFlat DBOutput updates the table with the tJavaRow response:
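For context, the pagination described above (a TOTAL row count split across nombreThreads iterations) can be sketched as plain Java; note that `PageSplitter`, `Page`, and the method names are my own illustration of the idea, not the job's generated code:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of splitting a total row count into per-thread pages,
 * mirroring the TOTAL DBInput + nombreThreads tLoop + Iterate setup.
 * All names here are hypothetical, not Talend-generated identifiers.
 */
public class PageSplitter {

    /** One page: the OFFSET/FETCH (or ROWNUM) window a single iteration would query. */
    public static final class Page {
        public final long offset;
        public final long limit;
        Page(long offset, long limit) { this.offset = offset; this.limit = limit; }
    }

    /** Split totalRows into pages of pageSize rows each; the last page may be shorter. */
    public static List<Page> split(long totalRows, long pageSize) {
        List<Page> pages = new ArrayList<>();
        for (long offset = 0; offset < totalRows; offset += pageSize) {
            pages.add(new Page(offset, Math.min(pageSize, totalRows - offset)));
        }
        return pages;
    }

    public static void main(String[] args) {
        // e.g. 400,000 rows in pages of 50,000 -> 8 iterations
        for (Page p : split(400_000, 50_000)) {
            System.out.println("OFFSET " + p.offset + " FETCH NEXT " + p.limit + " ROWS ONLY");
        }
    }
}
```

One thing worth checking with a layout like this: if the UPDATE branch shifts rows between pages while other threads are still paginating, pages can overlap or miss rows, which is a separate issue from the hang itself.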
Does anyone have any ideas? Anything helps.
Thank you very much for your help.
Hello,
Which Talend build version are you using when you get this issue? Is there any error message in the job log or the Studio log?
Studio log is located in <Talend Studio installation path>/workspace/.metadata/.log
For the job log, please click Window to open the menu, then select Show View->General->Error Log. Once you see an error, double-click it. The error log is generated automatically by the Studio.
Best regards
Sabrina
Hello Sabrina,
Thank you for your reply.
I'm using Talend Open Studio for Data Integration, but the incident does not happen in the IDE; it happens in a Docker container, and that's the thing: there is no log anywhere. The job does not even finish, it just sits there doing nothing.
Hello,
Are you using Docker as a job server for Talend Open Studio? Is it possible to get an error message from the Docker system?
Best regards
Sabrina
Hello Sabrina,
Sorry for the late response, it was the holidays.
No, the Docker container only executes the JVM, and there are no error messages in the Docker container.
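When a JVM hangs silently like this, a thread dump usually shows exactly where each thread is stuck (for example, blocked in a socket read waiting on the database, or waiting on the shared connection's lock). One option is running `jstack <pid>` inside the container via `docker exec`; another is dumping the stacks from inside the job itself, e.g. from a tJava component fired after a timeout. A minimal sketch of the latter (the class name is mine, not part of the job):

```java
import java.util.Map;

/** Minimal sketch: dump all live thread stacks, jstack-style, from inside the JVM. */
public class ThreadDumper {

    /** Build a text dump of every live thread, its state, and its stack trace. */
    public static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            sb.append('"').append(t.getName()).append("\" state=").append(t.getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // In a Talend job this could run in a tJava component, so the hang
        // location appears on stdout and therefore in `docker logs`.
        System.out.println(dumpAllThreads());
    }
}
```

If every Iterate thread shows up blocked on the same monitor or inside a JDBC call, that points at the shared DBConnection or at the database side rather than at Docker.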
Thanks.
Hello,
Are you able to execute this job successfully in the Studio, without using the Docker container?
Best regards
Sabrina
The problem does not come from the Docker container. To me it looks like you are causing a database deadlock.
You should check for blocking sessions in the database while the job is running. I would suggest using Oracle Enterprise Manager to find the blocking locks. Unfortunately I am not familiar with the Oracle database in this respect, but a friendly administrator can show you how it works.
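For reference, the same check can be done with a plain query against Oracle's `v$session` view while the job is hanging (it requires SELECT privilege on `v_$session`, so a DBA may need to run it, e.g. in SQL*Plus or SQL Developer). A small sketch that just builds and prints the query; the class and method names are mine:

```java
/**
 * Sketch: the Oracle query a DBA can run while the job hangs to list
 * sessions that are currently blocked by another session.
 * Requires SELECT privilege on v_$session.
 */
public class BlockingSessionCheck {

    /** Any session with a non-null blocking_session is currently waiting on a lock. */
    public static String blockingQuery() {
        return "SELECT sid, serial#, blocking_session, event, seconds_in_wait "
             + "FROM v$session WHERE blocking_session IS NOT NULL";
    }

    public static void main(String[] args) {
        // Print the statement so it can be pasted into a SQL client.
        System.out.println(blockingQuery());
    }
}
```

An empty result while the job is stuck would support the "no deadlock" finding; rows pointing at the UPDATE's session would explain the hang, since a lock wait blocks silently rather than raising an error.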
Hi, thank you for your response, but no: there are no deadlocks. I already checked with the DBA.
hello,
Yes, it works, but the problem is that I don't have the required data volume in my dev environment.
It is weird: in the environment with the error there are almost 400,000 rows to process (which is not a lot), while in my dev environment there are 8,000.
Bye