<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: java.io.IOException: DataStreamer Exception for 7.1.1 in Talend Studio</title>
    <link>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254682#M37619</link>
    <description>topic Re: java.io.IOException: DataStreamer Exception for 7.1.1 in Talend Studio</description>
    <pubDate>Tue, 19 Mar 2019 06:34:31 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2019-03-19T06:34:31Z</dc:date>
    <item>
      <title>java.io.IOException: DataStreamer Exception for 7.1.1</title>
      <link>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254679#M37616</link>
      <description>&lt;P&gt;Hi -&lt;BR /&gt;I have TOS DB 7.1.1 installed on my local machine, connecting to&amp;nbsp;a Hortonworks Sandbox in a VM. When I try the tHDFSOutput component, the target file is created with 0 bytes and no actual data, and the DataStreamer error below is thrown. Could you please help me rectify this?&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Starting job ReadWriteHDFS at 11:41 14/03/2019.&lt;/P&gt; 
&lt;P&gt;[statistics] connecting to socket on port 3829&lt;BR /&gt;[statistics] connected&lt;BR /&gt;[WARN ]: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable&lt;BR /&gt;Exception in component tHDFSOutput_1 (ReadWriteHDFS)&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception&lt;BR /&gt;java.nio.channels.UnresolvedAddressException&lt;BR /&gt;at sun.nio.ch.Net.checkAddress(Unknown Source)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1681)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1421)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1374)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)&lt;BR /&gt;java.io.IOException: DataStreamer Exception:&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:666)&lt;BR /&gt;Caused by: java.nio.channels.UnresolvedAddressException&lt;BR /&gt;at sun.nio.ch.Net.checkAddress(Unknown Source)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1681)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1421)&lt;BR /&gt;at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1374)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)&lt;BR /&gt;[statistics] disconnected&lt;BR /&gt;[ERROR]: org.apache.hadoop.hdfs.DFSClient - Failed to close inode 50803&lt;BR /&gt;java.io.IOException: DataStreamer Exception:&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:666)&lt;BR /&gt;Caused by: java.nio.channels.UnresolvedAddressException&lt;BR /&gt;at sun.nio.ch.Net.checkAddress(Unknown Source)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1681)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1421)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1374)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)&lt;/P&gt; 
&lt;P&gt;Job ReadWriteHDFS ended at 11:41 14/03/2019. [exit code=1]&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&lt;SPAN class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Capture.PNG" style="width: 999px;"&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="0683p000009M3PC.png"&gt;&lt;img src="https://community.qlik.com/t5/image/serverpage/image-id/127785iA7B7BEF7324D3708/image-size/large?v=v2&amp;amp;px=999" role="button" title="0683p000009M3PC.png" alt="0683p000009M3PC.png" /&gt;&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 16 Nov 2024 06:20:30 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254679#M37616</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-11-16T06:20:30Z</dc:date>
    </item>
    <item>
      <title>Re: java.io.IOException: DataStreamer Exception for 7.1.1</title>
      <link>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254680#M37617</link>
      <description>&lt;P&gt;&lt;A href="https://community.qlik.com/s/profile/0053p000007LPMJAA4"&gt;@AndyLi&lt;/A&gt;, the link below may help you.&lt;/P&gt; 
&lt;P&gt;&lt;A href="https://community.qlik.com/s/feed/0D53p00007vCpeZCAS" target="_blank"&gt;https://community.talend.com/t5/Design-and-Development/resolved-Simple-tHDFSPut-based-job-fails-on-6-1-but-not-6-0/m-p/103608&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Mar 2019 05:56:51 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254680#M37617</guid>
      <dc:creator>manodwhb</dc:creator>
      <dc:date>2019-03-14T05:56:51Z</dc:date>
    </item>
    <item>
      <title>Re: java.io.IOException: DataStreamer Exception for 7.1.1</title>
      <link>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254681#M37618</link>
      <description>&lt;P&gt;Try unticking the "Use Datanode Hostname" tickbox and retry. This is what&amp;nbsp;&lt;A href="https://community.qlik.com/s/profile/0053p000007LKmJAAW"&gt;@manodwhb&lt;/A&gt;&amp;nbsp;was suggesting.&lt;/P&gt;</description>
      <pubDate>Mon, 18 Mar 2019 09:37:10 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254681#M37618</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-03-18T09:37:10Z</dc:date>
    </item>
    <item>
      <title>Re: java.io.IOException: DataStreamer Exception for 7.1.1</title>
      <link>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254682#M37619</link>
      <description>&lt;P&gt;I tried&amp;nbsp;&lt;SPAN&gt;unticking the "Use Datanode Hostname" tickbox, but a new issue appeared.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;Starting job ReadWriteHDFS at 13:55 19/03/2019.&lt;/P&gt; 
&lt;P&gt;[statistics] connecting to socket on port 3544&lt;BR /&gt;[statistics] connected&lt;BR /&gt;[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception&lt;BR /&gt;org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/andy/andytest.text could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;BR /&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)&lt;BR /&gt;at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)&lt;/P&gt; 
&lt;P&gt;at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1498)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1398)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)&lt;BR /&gt;at com.sun.proxy.$Proxy10.addBlock(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)&lt;BR /&gt;at com.sun.proxy.$Proxy11.addBlock(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1568)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1363)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)&lt;BR /&gt;Exception in component tHDFSOutput_1 (ReadWriteHDFS)&lt;BR /&gt;org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/andy/andytest.text could only be replicated to 0 nodes instead of minReplication (=1). 
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;BR /&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)&lt;BR /&gt;at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)&lt;/P&gt; 
&lt;P&gt;at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1498)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1398)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)&lt;BR /&gt;at com.sun.proxy.$Proxy10.addBlock(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)&lt;BR /&gt;at com.sun.proxy.$Proxy11.addBlock(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1568)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1363)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)&lt;BR /&gt;[statistics] disconnected&lt;BR /&gt;[ERROR]: org.apache.hadoop.hdfs.DFSClient - Failed to close inode 60675&lt;BR /&gt;org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/andy/andytest.text could only be replicated to 0 nodes instead of minReplication (=1). 
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.&lt;BR /&gt;at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3296)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)&lt;BR /&gt;at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:422)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)&lt;/P&gt; 
&lt;P&gt;at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1498)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1398)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)&lt;BR /&gt;at com.sun.proxy.$Proxy10.addBlock(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)&lt;BR /&gt;at com.sun.proxy.$Proxy11.addBlock(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1568)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1363)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:552)&lt;/P&gt; 
&lt;P&gt;Job ReadWriteHDFS ended at 13:55 19/03/2019. [exit code=1]&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt; 
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Mar 2019 06:34:31 GMT</pubDate>
      <guid>https://community.qlik.com/t5/Talend-Studio/java-io-IOException-DataStreamer-Exception-for-7-1-1/m-p/2254682#M37619</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2019-03-19T06:34:31Z</dc:date>
    </item>
  </channel>
</rss>