HDFS error: could only be replicated to 0 nodes, instead of 1


Question

I've created an Ubuntu single-node hadoop cluster in EC2.

Testing a simple file upload to HDFS works from the EC2 machine, but doesn't work from a machine outside of EC2.
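The question doesn't show the exact command used for the test, but a minimal reproduction from the client machine might look like this (a sketch, assuming a local file named pies and the default HDFS home directory):

    # upload a local file to HDFS; the CLI of this era uses "hadoop fs"
    # (newer releases also accept "hdfs dfs")
    hadoop fs -put pies /user/ubuntu/pies

    # read it back to confirm the write actually reached a datanode
    hadoop fs -cat /user/ubuntu/pies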

I can browse the filesystem through the web interface from the remote machine, and it shows one datanode reported as in service. I've opened all TCP ports from 0 to 60000(!) in the security group, so I don't think it's that.
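One way to cross-check what the namenode believes, independent of the web UI, is the admin report (on Hadoop releases of this vintage the command is spelled hadoop dfsadmin):

    # ask the namenode for its view of the cluster: live datanodes,
    # configured capacity, and DFS space remaining
    hadoop dfsadmin -report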

I'm getting the error:

java.io.IOException: File /user/ubuntu/pies could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1448)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:690)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)

at org.apache.hadoop.ipc.Client.call(Client.java:905)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:928)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:811)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

The namenode log just gives the same error. The others don't seem to have anything interesting.
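For reference, the daemon logs live under the Hadoop install directory by default; a sketch of watching them while retrying the upload, assuming $HADOOP_HOME is set and the daemons run as the ubuntu user (the filenames embed the user and hostname, so the pattern below is an assumption):

    # follow the namenode and datanode logs during the failing upload
    tail -f $HADOOP_HOME/logs/hadoop-ubuntu-namenode-*.log \
            $HADOOP_HOME/logs/hadoop-ubuntu-datanode-*.log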

Any ideas?

Cheers

Answer

WARNING: the following will destroy ALL data on HDFS. Do not execute the steps in this answer unless you don't care about destroying existing data!!

You should do this (a command-line sketch follows the list):

  1. Stop all hadoop services
  2. Delete the dfs/name and dfs/data directories
  3. Run hdfs namenode -format and answer with a capital Y
  4. Start the hadoop services
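A sketch of those steps on the command line. The dfs/name and dfs/data locations are whatever dfs.name.dir and dfs.data.dir point to in hdfs-site.xml, so the paths below are placeholders; stop-all.sh and start-all.sh are the wrapper scripts shipped with Hadoop releases of this era:

    stop-all.sh                                   # 1. stop all hadoop services
    rm -rf /path/to/dfs/name /path/to/dfs/data    # 2. placeholder paths: use your dfs.name.dir / dfs.data.dir
    hdfs namenode -format                         # 3. answer the re-format prompt with a capital Y
    start-all.sh                                  # 4. start the hadoop services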

Also, check the disk space on your system and make sure the logs are not warning you about it.
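For example, a quick check that the partition backing the data directory isn't full (the path is a placeholder):

    # free space on the filesystem holding the datanode's storage
    df -h /path/to/dfs/data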

