Error message while copying a file from the local file system to HDFS


Question

I tried to copy a file from the local file system to HDFS using the command hadoop dfs -copyFromLocal in/ /user/hduser/hadoop

The following error message is shown. Please help me find the problem.

DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.

15/02/02 19:22:23 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/hadoop._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
copyFromLocal: File /user/hduser/hadoop._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

My hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>

jps

12805 NameNode
13276 ResourceManager
13398 NodeManager
13081 SecondaryNameNode
14129 Jps

Solution

From the jps output we can see that the DataNode process is not running, which matches the "There are 0 datanode(s) running" message in the error.
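Before changing anything, you can confirm this from the command line; a quick check, assuming the hdfs binary is on your PATH (the exact wording of the report varies between Hadoop versions):

# Ask the NameNode how many DataNodes have registered;
# with this error it will report 0 live/available DataNodes
hdfs dfsadmin -report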

First, stop Hadoop from the Hadoop sbin directory:

cd /usr/local/hadoop/sbin
stop-all.sh

Then remove the contents of the NameNode and DataNode storage directories. The paths below are from the answerer's setup; use whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in your own hdfs-site.xml (here, /home/hduser/mydata/hdfs/namenode and /home/hduser/mydata/hdfs/datanode). Note that this deletes everything stored in HDFS.

rm -rf /usr/local/hadoop_store/hdfs/namenode/*
rm -rf /usr/local/hadoop_store/hdfs/datanode/*
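The usual reason a DataNode refuses to start after the NameNode has been re-formatted is a clusterID mismatch between the two storage directories, which is why both are wiped here. If you want to confirm that before deleting anything, compare the VERSION files; a sketch using the storage paths from the hdfs-site.xml above:

# Each storage directory records the clusterID it was formatted with;
# the DataNode only starts when the two values match
grep clusterID /home/hduser/mydata/hdfs/namenode/current/VERSION
grep clusterID /home/hduser/mydata/hdfs/datanode/current/VERSION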

Format the NameNode:

hadoop namenode -format
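As the DEPRECATED warning in the question's output suggests, recent Hadoop releases prefer the hdfs entry point over the hadoop script for HDFS commands, so the equivalent modern invocation is:

hdfs namenode -format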

Then start all the daemons again:

start-all.sh
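After the restart, jps should list a DataNode process alongside the others, and the original copy should succeed. A quick check, reusing the paths from the question (note that stop-all.sh and start-all.sh are themselves deprecated in current releases in favor of the separate stop-dfs.sh/start-dfs.sh and stop-yarn.sh/start-yarn.sh scripts):

# DataNode should now appear in the process list
jps

# Retry the original copy using the non-deprecated command form
hdfs dfs -copyFromLocal in/ /user/hduser/hadoop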
