<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: File upload issue: Windows to HDFS (Cloudera 5.13) in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/File-upload-issue-windows-to-HDFS-cloudera-5-13/m-p/2203340#M4697</link>
    <description>&lt;P&gt;Hi community,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;I am trying a simple file upload exercise, which is also the first exercise in the Talend 7.2.1 Getting Started Guide PDF. Please see the details below.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;Error:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;[statistics] connecting to socket on port 3599&lt;BR /&gt;[statistics] connected&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - Abandoning BP-1430972282-10.0.2.15-1581914434997:blk_1073741825_1001&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - Excluding datanode DatanodeInfoWithStorage[10.0.2.15:50010,DS-ef8b4d0a-6c72-4f9a-943c-eb456045dde2,DISK]&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception&lt;BR /&gt;org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/puccini/getting_started/directors.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;Setup details:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;1. The NameNode is up (verified via 127.0.0.1:50070 inside the Cloudera VM), and the DataNode is also up and running.&lt;/P&gt; 
&lt;P&gt;2. While creating the Hadoop cluster metadata object, I unchecked the 'Use datanode hostname' property.&lt;/P&gt; 
&lt;P&gt;3. The Hadoop cluster metadata object has the NameNode URI set to 'hdfs://localhost:8020', and this connection works fine.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;My questions:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;1. The NameNode URI works ONLY if it is set to 'hdfs://localhost:8020', yet my Cloudera VM's IP is 10.0.2.15. What could be the reason?&lt;/P&gt; 
&lt;P&gt;2. In the error above, the IP in 'Excluding datanode DatanodeInfoWithStorage[&lt;STRONG&gt;10.0.2.15&lt;/STRONG&gt;:50010,DS-ef8b4d0a-6c72-4f9a-943c-eb456045dde2,DISK]' at one point appeared as 127.0.0.1 instead. Is this random, or am I overlooking a specific setting?&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;I have tried several of the suggestions posted on the community for this problem; so far none has worked for me.&lt;/P&gt; 
&lt;P&gt;Your help is much appreciated.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Sat, 16 Nov 2024 03:15:16 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2024-11-16T03:15:16Z</dc:date>
    <item>
      <title>File upload issue: Windows to HDFS (Cloudera 5.13)</title>
      <link>https://community.qlik.com/t5/Talend-Studio/File-upload-issue-windows-to-HDFS-cloudera-5-13/m-p/2203340#M4697</link>
      <description>&lt;P&gt;Hi community,&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;I am trying a simple file upload exercise, which is also the first exercise in the Talend 7.2.1 Getting Started Guide PDF. Please see the details below.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;Error:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;[statistics] connecting to socket on port 3599&lt;BR /&gt;[statistics] connected&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - Abandoning BP-1430972282-10.0.2.15-1581914434997:blk_1073741825_1001&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - Excluding datanode DatanodeInfoWithStorage[10.0.2.15:50010,DS-ef8b4d0a-6c72-4f9a-943c-eb456045dde2,DISK]&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception&lt;BR /&gt;org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/puccini/getting_started/directors.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;Setup details:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;1. The NameNode is up (verified via 127.0.0.1:50070 inside the Cloudera VM), and the DataNode is also up and running.&lt;/P&gt; 
&lt;P&gt;2. While creating the Hadoop cluster metadata object, I unchecked the 'Use datanode hostname' property.&lt;/P&gt; 
&lt;P&gt;3. The Hadoop cluster metadata object has the NameNode URI set to 'hdfs://localhost:8020', and this connection works fine.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;My questions:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;1. The NameNode URI works ONLY if it is set to 'hdfs://localhost:8020', yet my Cloudera VM's IP is 10.0.2.15. What could be the reason?&lt;/P&gt; 
&lt;P&gt;2. In the error above, the IP in 'Excluding datanode DatanodeInfoWithStorage[&lt;STRONG&gt;10.0.2.15&lt;/STRONG&gt;:50010,DS-ef8b4d0a-6c72-4f9a-943c-eb456045dde2,DISK]' at one point appeared as 127.0.0.1 instead. Is this random, or am I overlooking a specific setting?&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;STRONG&gt;NOTE:&lt;/STRONG&gt;&lt;/P&gt; 
&lt;P&gt;I have tried several of the suggestions posted on the community for this problem; so far none has worked for me.&lt;/P&gt; 
&lt;P&gt;Your help is much appreciated.&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 16 Nov 2024 03:15:16 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/File-upload-issue-windows-to-HDFS-cloudera-5-13/m-p/2203340#M4697</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-11-16T03:15:16Z</dc:date>
    </item>
  </channel>
</rss>

