Anonymous

MapReduce Job - Getting server.namenode.LeaseExpiredException Error

I am running ten bz2 files, each about 200 MB, and my MapReduce job fails with the error below. The job manages to run with one or two files. Any idea what setting I am missing?
No lease on /user/cloudera/Messenger_Demo/DS7/20150731200055919/SUCCESS/part-00000.avro (inode 24546): File does not exist.
15/07/31 21:53:51 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/07/31 21:53:51 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/07/31 21:53:57 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
15/07/31 21:53:57 INFO mapred.FileInputFormat: Total input paths to process : 10
15/07/31 21:53:57 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/07/31 21:53:57 INFO net.NetworkTopology: Adding a new node: /default/127.0.0.1:50010
15/07/31 21:53:57 INFO mapreduce.JobSubmitter: number of splits:10
15/07/31 21:53:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1438398605238_0010
15/07/31 21:53:58 INFO impl.YarnClientImpl: Submitted application application_1438398605238_0010
15/07/31 21:53:58 INFO mapreduce.Job: The url to track the job:
15/07/31 21:53:58 INFO Configuration.deprecation: jobclient.output.filter is deprecated. Instead, use mapreduce.client.output.filter
Running job: job_1438398605238_0010
 map 0% reduce 0%
 map 10% reduce 0%
 map 30% reduce 0%
 map 40% reduce 0%
 map 50% reduce 0%
 map 100% reduce 0%
Job complete: job_1438398605238_0010
Counters: 32
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=789970
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1431459923
        HDFS: Number of bytes written=5565119895
        HDFS: Number of read operations=25
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=20
    Job Counters
        Failed map tasks=1
        Killed map tasks=5
        Launched map tasks=11
        Data-local map tasks=11
        Total time spent by all maps in occupied slots (ms)=2338547
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=2338547
        Total vcore-seconds taken by all map tasks=2338547
        Total megabyte-seconds taken by all map tasks=2394672128
    Map-Reduce Framework
        Map input records=20000000
        Map output records=0
        Input split bytes=1698
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=14717
        CPU time spent (ms)=936940
        Physical memory (bytes) snapshot=1091555328
        Virtual memory (bytes) snapshot=4501733376
        Total committed heap usage (bytes)=986185728
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
Job Failed: Task failed task_1438398605238_0010_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
java.io.IOException: Job failed
    at org.talend.hadoop.mapred.lib.MRJobClient.runJob(MRJobClient.java:154)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.runMRJob(DS7_MapReduce_Test.java:2029)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.access$1(DS7_MapReduce_Test.java:2019)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test$1.run(DS7_MapReduce_Test.java:1854)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test$1.run(DS7_MapReduce_Test.java:1)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.tHDFSInput_1Process(DS7_MapReduce_Test.java:1748)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.run(DS7_MapReduce_Test.java:1997)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.runJobInTOS(DS7_MapReduce_Test.java:1955)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.main(DS7_MapReduce_Test.java:1940)
disconnected

When I checked the job history, it shows the error below:
Error: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/cloudera/Messenger_Demo/DS7/20150731200055919/SUCCESS/part-00000.avro (inode 24546): File does not exist.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3319)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3407)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3377)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:673)
    at
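For context: a LeaseExpiredException of this kind usually means more than one writer touched the same HDFS file. HDFS grants a single-writer lease per file, so when a second task attempt (for example, a speculative duplicate of a slow map) opens the same part file, the first attempt's lease is revoked and it fails with "File does not exist". One thing worth ruling out is speculative execution; a minimal sketch, assuming the job configuration were set by hand (Talend-generated jobs expose these as Hadoop properties instead, and the class name here is made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class LeaseSafeJobSetup {
        // Hypothetical helper, not part of the generated Talend job.
        public static Job configure() throws Exception {
            Configuration conf = new Configuration();
            // Speculative execution launches duplicate attempts of slow
            // tasks; if two attempts write to the same output path, the
            // losing attempt's lease is revoked -> LeaseExpiredException.
            conf.setBoolean("mapreduce.map.speculative", false);
            conf.setBoolean("mapreduce.reduce.speculative", false);
            return Job.getInstance(conf, "ds7-mapreduce-test");
        }
    }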
3 Replies
Anonymous (Author)

OK, I think I solved this by unticking the option "Compress intermediate map output to reduce network traffic" in the Hadoop Configuration of the MapReduce job. After this I get the following error instead:
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
How do I solve this?
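Exit code 143 means the container received SIGTERM: the ApplicationMaster killed the task, almost always because it outgrew its YARN memory allocation. The usual remedy is to raise the map container size and keep the JVM heap somewhat below it. A hedged sketch using the standard Hadoop properties (the numbers are illustrative, not taken from this job):

    import org.apache.hadoop.conf.Configuration;

    public class MapMemoryTuning {
        // Illustrative values only; size these to your cluster and data.
        public static void apply(Configuration conf) {
            // Total memory YARN reserves for each map container.
            conf.setInt("mapreduce.map.memory.mb", 2048);
            // JVM heap inside that container; leave roughly 20% headroom
            // so the container is not killed for exceeding its allocation.
            conf.set("mapreduce.map.java.opts", "-Xmx1638m");
        }
    }

In Talend these same properties can typically be supplied as advanced Hadoop properties on the job rather than in code.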
Anonymous (Author)

OK, I found the root cause.
I am using a tMap in my MapReduce job. When I use one output link it works fine, but when I use two output links, this error occurs. Is there a setting I need to change?
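That root cause is consistent with the lease error: with two output links the generated job ends up with two writers, and if both target the same output directory they can race for the lease on the same part file. In hand-written MapReduce, the usual way to let one mapper feed two outputs safely is MultipleOutputs, which gives each output its own base path. A sketch under that assumption (this is not the code Talend generates; the link names and routing rule are made up):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    public class TwoLinkMapper extends Mapper<LongWritable, Text, Text, Text> {
        private MultipleOutputs<Text, Text> out;

        @Override
        protected void setup(Context ctx) {
            out = new MultipleOutputs<>(ctx);
        }

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            // Route each record to its own named output so the two links
            // never compete for a lease on the same HDFS part file.
            if (value.toString().startsWith("A")) {
                out.write("linkA", new Text("A"), value, "linkA/part");
            } else {
                out.write("linkB", new Text("B"), value, "linkB/part");
            }
        }

        @Override
        protected void cleanup(Context ctx)
                throws IOException, InterruptedException {
            out.close();
        }
    }

Each named output has to be registered in the driver first, e.g. MultipleOutputs.addNamedOutput(job, "linkA", TextOutputFormat.class, Text.class, Text.class).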
Anonymous (Author)

 

What was the final solution for this issue?

Quoting the earlier reply:
"OK, I found the root cause. I am using a tMap in my MapReduce job. When I use one output link it works fine, but when I use two output links, this error occurs. Is there a setting I need to change?"