I am trying a simple file upload exercise, which is also the first exercise in the Talend 7.2.1 Getting Started Guide (PDF). Please see the details below.
Error:
[statistics] connecting to socket on port 3599
[statistics] connected
[WARN ]: org.apache.hadoop.hdfs.DFSClient - Abandoning BP-1430972282-10.0.2.15-1581914434997:blk_1073741825_1001
[WARN ]: org.apache.hadoop.hdfs.DFSClient - Excluding datanode DatanodeInfoWithStorage[10.0.2.15:50010,DS-ef8b4d0a-6c72-4f9a-943c-eb456045dde2,DISK]
[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/puccini/getting_started/directors.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
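To isolate whether the failure is in Talend or in the cluster itself, here is a minimal standalone sketch of the same upload using the plain Hadoop FileSystem API (my own test harness, not the Talend-generated code; the class name is mine, the URI and paths are the ones from my job):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same namenode URI as in my Talend cluster metadata (see setup below)
        conf.set("fs.defaultFS", "hdfs://localhost:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            // The same file the exercise uploads
            fs.copyFromLocalFile(new Path("directors.txt"),
                    new Path("/user/puccini/getting_started/directors.txt"));
        }
    }
}

If this standalone test raises the same RemoteException, the problem is in the HDFS/network setup rather than in the Talend job itself.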
Some setup inputs:
1. The namenode is up; verified via 127.0.0.1:50070 in the Cloudera VM. The datanode is up and running as well (see the client-side check after this list).
2. While creating the Hadoop cluster metadata object, I unchecked the 'Use datanode hostname' property.
3. The namenode URI in my Hadoop cluster metadata object is set to 'hdfs://localhost:8020', and the connection check with this URI passes.
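As a client-side cross-check on item 1, this sketch (standard DistributedFileSystem API; the class name is mine) asks the namenode which datanodes it knows about and at which addresses, i.e. the address a writing client will actually be sent to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListDatanodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:8020");          // namenode URI from item 3
        // My assumption: this standard property is what the 'Use datanode
        // hostname' checkbox from item 2 maps to
        conf.setBoolean("dfs.client.use.datanode.hostname", false);
        try (DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf)) {
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                // Transfer address the client is given for block writes
                System.out.println(dn.getXferAddr() + " (" + dn.getHostName() + ")");
            }
        }
    }
}

On my setup I would expect this to print 10.0.2.15:50010, which is exactly the address being excluded in the error above.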
My doubts:
1. The namenode URI works ONLY if it is set to 'hdfs://localhost:8020'. However, my Cloudera VM's IP is 10.0.2.15, and the URI does not work with that IP. What could be the reason?
2. In the error above, the IP in 'Excluding datanode DatanodeInfoWithStorage[10.0.2.15:50010,DS-ef8b4d0a-6c72-4f9a-943c-eb456045dde2,DISK]' appeared as 127.0.0.1 on one earlier run. Is this random, or am I overlooking some specific setting? (See also the reachability check after this list.)
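For context on doubt 2, as far as I understand the HDFS write path: the client asks the namenode for a block location, the namenode answers with the datanode's address (IP or hostname, depending on 'Use datanode hostname' / dfs.client.use.datanode.hostname), and the client then streams the block to that datanode directly on port 50010. The upload can therefore only succeed if the machine running the job can reach whatever address comes back. A quick reachability check (plain Java; the class name and timeout are mine):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // Can this machine open a TCP connection to the datanode transfer
        // address reported in the error above?
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("10.0.2.15", 50010), 3000);
            System.out.println("datanode port reachable");
        } catch (Exception e) {
            System.out.println("datanode port NOT reachable: " + e);
        }
    }
}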
NOTE:
I have tried several of the suggestions posted on the community forum for this problem; so far none has worked for me.