Hello,
I am using Talend Fabric 6.2.1 and Hortonworks 2.4.
I am trying to launch the Big Data Batch demo job MR_count_code. I changed its configuration to point to the virtual machine on which Hortonworks is installed.
A standard job runs fine, but the Big Data Batch job always fails with this error:
connecting to socket on port 3774
connected
org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
Running job: job_1478076384405_0018
map 0% reduce 0%
Job complete: job_1478076384405_0018
Counters: 0
Job Failed: Application application_1478076384405_0018 failed 2 times due to AM Container for appattempt_1478076384405_0018_000002 exited with exitCode: 255
For more detailed output, check application tracking page:, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e02_1478076384405_0018_02_000001
Exit code: 255
Stack trace: ExitCodeException exitCode=255:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 255
Failing this attempt. Failing the application.
java.io.IOException: Job failed!
at org.talend.hadoop.mapred.lib.MRJobClient.runJob(MRJobClient.java:166)
at demo.mr_count_code_0_1.MR_Count_Code.runMRJob(MR_Count_Code.java:2804)
at demo.mr_count_code_0_1.MR_Count_Code.access$1(MR_Count_Code.java:2794)
at demo.mr_count_code_0_1.MR_Count_Code$1.run(MR_Count_Code.java:2615)
at demo.mr_count_code_0_1.MR_Count_Code$1.run(MR_Count_Code.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at demo.mr_count_code_0_1.MR_Count_Code.tHDFSInput_1Process(MR_Count_Code.java:2530)
at demo.mr_count_code_0_1.MR_Count_Code.run(MR_Count_Code.java:2772)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at demo.mr_count_code_0_1.MR_Count_Code.runJobInTOS(MR_Count_Code.java:2711)
at demo.mr_count_code_0_1.MR_Count_Code.main(MR_Count_Code.java:2690)
disconnected
Job MR_Count_Code finished at 13:19 02/11/2016.
Thank you for your response.
2016-11-02 17:42:20,184 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.IllegalArgumentException: Unable to parse '/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework' as a URI, check the setting for mapreduce.application.framework.path
at org.apache.hadoop.mapreduce.v2.util.MRApps.getMRFrameworkName(MRApps.java:181)
at org.apache.hadoop.mapreduce.v2.util.MRApps.setMRFrameworkClasspath(MRApps.java:206)
at org.apache.hadoop.mapreduce.v2.util.MRApps.setClasspath(MRApps.java:258)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.getInitialClasspath(TaskAttemptImpl.java:621)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createCommonContainerLaunchContext(TaskAttemptImpl.java:757)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createContainerLaunchContext(TaskAttemptImpl.java:821)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1557)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1534)
at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1084)
at org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:145)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1368)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1360)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.URISyntaxException: Illegal character in path at index 11: /hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parseHierarchical(URI.java:3105)
at java.net.URI$Parser.parse(URI.java:3063)
at java.net.URI.<init>(URI.java:588)
at org.apache.hadoop.mapreduce.v2.util.MRApps.getMRFrameworkName(MRApps.java:179)
... 18 more
This is a known issue with Hortonworks: the ${hdp.version} variable is not resolved by most MapReduce clients.
Set mapreduce.application.classpath (and mapreduce.application.framework.path) explicitly inside the Talend job, and don't use ${hdp.version}; use the actual version value.
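A minimal sketch of what that amounts to, expressed against the Hadoop client Configuration rather than the Talend dialogs. The HDP build number 2.4.0.0-169 is only an assumed example; read the real one from the directory name under /usr/hdp on your cluster, and copy the classpath from your cluster's mapred-site.xml.

import org.apache.hadoop.conf.Configuration;

public class HdpVersionWorkaround {
    public static void main(String[] args) {
        // Example HDP build number; replace with the directory name under /usr/hdp on your cluster.
        String hdpVersion = "2.4.0.0-169";

        Configuration conf = new Configuration();

        // Use the literal version so the ApplicationMaster can parse the URI
        // (the unresolved ${hdp.version} is what triggers the URISyntaxException above).
        conf.set("mapreduce.application.framework.path",
                "/hdp/apps/" + hdpVersion + "/mapreduce/mapreduce.tar.gz#mr-framework");

        // Copy this value from the cluster's mapred-site.xml and expand ${hdp.version};
        // the entries below are typical HDP defaults and may differ on your cluster.
        conf.set("mapreduce.application.classpath",
                "$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/common/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/yarn/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:"
                + "$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:"
                + "/usr/hdp/" + hdpVersion + "/hadoop/lib/hadoop-lzo-0.6.0." + hdpVersion + ".jar:"
                + "/etc/hadoop/conf/secure");

        // Quick check that the path no longer contains an unresolved variable.
        System.out.println(conf.get("mapreduce.application.framework.path"));
    }
}

In the Studio, the same two key/value pairs can typically be entered as Hadoop properties on the job's Hadoop configuration, which is the practical meaning of setting them "inside the Talend job".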
Could you please advise on this? I am not able to locate where to explicitly set this value.
I have checked the generated code and cannot find the ${hdp.version} variable or the path setting for mapreduce.application.classpath.
Hi Amula, yes, I was able to set the value under the Hadoop configuration / Hadoop properties. The version issue is now resolved, but I am facing another issue:
Diagnostics: Exception from container-launch.
Container id: container_e38_1481903964317_0019_02_000001
Exit code: 1
Stack trace: org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException: Launch container failed
at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:109)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:89)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:392)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Shell output: main : command provided 1
main : run as user is xxxxx
main : requested yarn user is xxxxx
Getting exit code file...
Creating script paths...
Writing pid file...
Writing to tmp file /data/c/hadoop/yarn/local/nmPrivate/application_1481903964317_0019/container_e38_1481903964317_0019_02_000001/container_e38_1481903964317_0019_02_000001.pid.tmp
Writing to cgroup task files...
Creating local dirs...
Launching container...
Getting exit code file...
Creating script paths...
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.