Writing to HDFS could only be replicated to 0 nodes instead of minReplication (=1)


Problem Description

I have 3 data nodes running, and while running a job I am getting the following error:

java.io.IOException: File /user/ashsshar/olhcache/loaderMap9b663bd9 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1325)

This error mainly comes up when the DataNode instances have run out of space or when the DataNodes are not running. I tried restarting the DataNodes but am still getting the same error.

hdfs dfsadmin -report on my cluster nodes clearly shows that plenty of space is available.

I am not sure why this is happening.
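For reference, a minimal sketch of checks that narrow this down before restarting anything (stock Hadoop 2.x commands; the hdfs service user and the local data directory path are assumptions):

# Show live/dead DataNodes and how much space the NameNode believes each has
sudo -u hdfs hdfs dfsadmin -report

# Check overall filesystem health and block replication
sudo -u hdfs hdfs fsck /

# "3 node(s) are excluded" usually means the client reached the NameNode but all
# DataNodes were unusable for the write; also check free space on the local volumes
df -h /var/lib/hadoop-hdfs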

Recommended Answer

1. Stop all Hadoop daemons:

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done

2. Remove all files from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name, as in the example below:

Eg: devan@Devan-PC:~$ sudo rm -r /var/lib/hadoop-hdfs/cache/
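A word of caution on this step: the path above matches a CDH-style layout, and deleting the NameNode metadata directory (together with the format in step 3) wipes everything stored in HDFS. A quick way to confirm the actual directory on your install before deleting anything:

# Print the NameNode metadata directory configured on this node
# (dfs.namenode.name.dir on Hadoop 2.x; dfs.name.dir on Hadoop 1.x)
hdfs getconf -confKey dfs.namenode.name.dir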

3. Format the NameNode:

sudo -u hdfs hdfs namenode -format
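If the directory still holds an old filesystem, the format command stops at an interactive Y/N prompt; stock Hadoop also accepts a flag to skip it (verify against your version before relying on it):

# Reformat without the confirmation prompt; this destroys existing HDFS metadata
sudo -u hdfs hdfs namenode -format -force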

4. Start all Hadoop daemons:

for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
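Once the daemons are back up, it is worth confirming that all three DataNodes have re-registered and that a write actually succeeds before rerunning the job; a minimal sketch (the test file name is a placeholder):

# The report should now show 3 live DataNodes with non-zero remaining space
sudo -u hdfs hdfs dfsadmin -report | grep -iE 'live datanodes|dfs remaining'

# Trial write: if this succeeds, the minReplication error is gone
echo test > /tmp/hdfs-write-test.txt
sudo -u hdfs hdfs dfs -put /tmp/hdfs-write-test.txt /tmp/
sudo -u hdfs hdfs dfs -cat /tmp/hdfs-write-test.txt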

