Writing to HDFS could only be replicated to 0 nodes instead of minReplication (=1)
Problem description
I have 3 data nodes running, and while running a job I get the error given below:

java.io.IOException: File /user/ashsshar/olhcache/loaderMap9b663bd9 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1325)

This error mainly occurs when the DataNode instances have run out of space or when the DataNodes are not running. I tried restarting the DataNodes but still get the same error.

dfsadmin -report on my cluster nodes clearly shows that plenty of space is available.

I am not sure why this is happening.
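Since the error can appear even when dfsadmin -report shows free space, it can help to check the local filesystem usage on each DataNode directly. A minimal sketch (the threshold of 90% and the path checked are illustrative assumptions, not values from the question; point it at your actual dfs.datanode.data.dir volume):

```shell
# Sketch: report the used% of the filesystem holding a given directory.
# Uses POSIX `df -P` so the data line is never wrapped; field 5 is "Use%".
check_usage() {
  dir="$1"
  df -P "$dir" | awk 'NR==2 { gsub("%", "", $5); print $5 }'
}

# Check the root filesystem as an example; on a real cluster you would
# run this on every DataNode against its data directory.
used=$(check_usage /)
echo "filesystem usage: ${used}%"
if [ "$used" -ge 90 ]; then
  echo "WARNING: low on space; this DataNode may be excluded from writes"
fi
```

A DataNode whose data volume is (nearly) full is skipped by the NameNode's block placement, which is one way all 3 nodes can end up "excluded in this operation".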
1. Stop all Hadoop daemons:
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
2. Remove all files from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
E.g.: devan@Devan-PC:~$ sudo rm -r /var/lib/hadoop-hdfs/cache/
3. Format the NameNode:
sudo -u hdfs hdfs namenode -format
4. Start all Hadoop daemons:
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
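The start/stop loops above depend on the init scripts in /etc/init.d following the hadoop* naming convention. A self-contained illustration of that same glob-loop pattern, run against a throwaway directory (with made-up service names) so that no real services are touched:

```shell
# Sketch of the `for x in \`cd DIR ; ls hadoop*\`` pattern used above.
# The service names here are placeholders, not an actual Hadoop layout.
tmpdir=$(mktemp -d)
touch "$tmpdir/hadoop-hdfs-namenode" \
      "$tmpdir/hadoop-hdfs-datanode" \
      "$tmpdir/unrelated-service"

# Only the hadoop* entries are matched; unrelated-service is skipped.
for x in `cd "$tmpdir" ; ls hadoop*` ; do
  echo "would run: sudo service $x stop"
done

rm -rf "$tmpdir"
```

Note that step 3 (namenode -format) wipes HDFS metadata, so this sequence rebuilds the filesystem from scratch rather than repairing it in place.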