Hi community.
System features:
- Talend Studio Big Data 7.3.1 (Enterprise edition, Windows OS)
- Dynamic Distribution Hadoop: Cloudera 6.3.2
- Spark 2.4
- JDK 1.8
I'm trying to read a Hive table and insert the records into a new Hive table with the Spark engine; it's a simple job.
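For reference, the job is roughly equivalent to this PySpark sketch (the database and table names are placeholders, not the real ones; the actual job is built with Talend Hive/Spark components):

    from pyspark.sql import SparkSession

    # Build a Spark session with Hive support so the metastore tables are visible.
    spark = (SparkSession.builder
             .appName("hive_copy_example")
             .enableHiveSupport()
             .getOrCreate())

    # Read the source Hive table and append its rows into the target Hive table.
    df = spark.table("mydb.source_table")
    df.write.mode("append").saveAsTable("mydb.target_table")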
But I have an issue that I can't resolve, because I don't know exactly what the specific error is. The error log is below:
[INFO ] 08:08:10 org.apache.spark.deploy.yarn.Client-
client token: N/A
diagnostics: Application application_1584601331260_0001 failed 2 times due to AM Container for appattempt_1584601331260_0001_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2020-03-19 08:08:08.185]Exception from container-launch.
Container id: container_1584601331260_0001_02_000001
Exit code: 1
Hi again.
We get this issue when we run the job from the local machine while it actually executes on a Hadoop cluster on another machine.
Solution:
Configure the Hadoop cluster connection as shown in the image below. The jar also has to be placed manually in the same path on the local machine as on the cluster. In this example we create the same path "/data/tmp/talend/hadoop/conf/" on both machines, but you can use any path you want, as long as the jar file is uploaded to it on both sides. This jar contains the XML files with our Hadoop cluster configuration.
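As an illustration only, here is a minimal Python sketch of how such a jar could be built at that path (the config file names, the source directory /etc/hadoop/conf and the jar name hadoop-conf.jar are assumptions; a .jar is just a zip archive, so zipfile is enough):

    import os
    import zipfile

    SRC_DIR = "/etc/hadoop/conf"                 # assumed location of the client XML files
    TARGET_DIR = "/data/tmp/talend/hadoop/conf"  # same path must exist on both machines
    JAR_NAME = "hadoop-conf.jar"                 # hypothetical jar name
    CONF_FILES = ["core-site.xml", "hdfs-site.xml", "yarn-site.xml",
                  "mapred-site.xml", "hive-site.xml"]

    os.makedirs(TARGET_DIR, exist_ok=True)

    # Bundle the Hadoop configuration XML files at the root of the jar.
    with zipfile.ZipFile(os.path.join(TARGET_DIR, JAR_NAME), "w") as jar:
        for name in CONF_FILES:
            src = os.path.join(SRC_DIR, name)
            if os.path.exists(src):
                jar.write(src, arcname=name)

The same jar then has to be copied to the identical path on the cluster machine (for example with scp).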
Regards!