Setting fs.default.name in core-site.xml Sets HDFS to Safemode


Problem Description


I installed the Cloudera CDH4 distribution on a single machine in pseudo-distributed mode and successfully tested that it was working correctly (e.g. I can run MapReduce programs, insert data on the Hive server, etc.). However, if I change the core-site.xml file to have fs.default.name set to the machine name rather than localhost and restart the NameNode service, HDFS enters safe mode.

Before changing fs.default.name, I ran the following to check the state of HDFS:

$ hadoop dfsadmin -report
...
Configured Capacity: 18503614464 (17.23 GB)
Present Capacity: 13794557952 (12.85 GB)
DFS Remaining: 13790785536 (12.84 GB)
DFS Used: 3772416 (3.60 MB)
DFS Used%: 0.03%
Under replicated blocks: 2
Blocks with corrupt replicas: 0
Missing blocks: 0

Then I made the modification to core-site.xml (with the machine name being hadoop):

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop:8020</value>
</property>
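
For context, the property sits inside the top-level <configuration> element; a minimal core-site.xml sketch along these lines (only the hadoop hostname comes from this question, the rest is standard Hadoop boilerplate):

<?xml version="1.0"?>
<!-- core-site.xml: fs.default.name is the default filesystem URI used by
     HDFS clients (it is the deprecated name for fs.defaultFS in later releases) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop:8020</value>
  </property>
</configuration>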

I restarted the service and reran the report.

$ sudo service hadoop-hdfs-namenode restart
$ hadoop dfsadmin -report
...
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
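
The safe-mode state can also be queried (or forcibly cleared) directly with dfsadmin; these are standard HDFS subcommands, not shown in the original post:

$ hadoop dfsadmin -safemode get
Safe mode is ON
$ hadoop dfsadmin -safemode leave

Note that forcing the NameNode out of safe mode would not help in this situation: with 0 B of configured capacity, no DataNode has registered, so writes would still fail.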

Interestingly, I can still perform some HDFS commands. For example, I can run

$ hadoop fs -ls /tmp

However, if I try to read a file using hadoop fs -cat or to place a file in HDFS, I am told the NameNode is in safe mode.

$ hadoop fs -put somefile .
put: Cannot create file/user/hadinstall/somefile._COPYING_. Name node is in safe mode.

The reason I need fs.default.name set to the machine name is that I need to communicate with this machine on port 8020 (the default NameNode port). If fs.default.name is left as localhost, the NameNode service will not listen for external connection requests.
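
One way to check which interface the NameNode is actually bound to is netstat (a diagnostic added here for illustration; the PID and exact output columns are hypothetical):

$ sudo netstat -tlnp | grep 8020
tcp   0   0 127.0.0.1:8020   0.0.0.0:*   LISTEN   12345/java

A 127.0.0.1:8020 binding means only local connections are accepted, whereas a 0.0.0.0:8020 or 192.168.0.201:8020 binding would also accept external ones.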

I am at a loss as to why this is happening and would appreciate any help.

Solution

The issue stemmed from domain name resolution. The /etc/hosts file needed to be modified to map the hadoop machine's IP address to both localhost and the fully qualified domain name. This also explains the mixed behaviour above: with the hostname misresolved, the DataNode never registered, so the NameNode reported zero capacity and stayed in safe mode. A pure metadata operation like fs -ls still works, but fs -cat needs a live DataNode, and fs -put is rejected outright while safe mode is on.

192.168.0.201 hadoop.fully.qualified.domain.com localhost
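
After editing /etc/hosts, the mapping can be sanity-checked before restarting the services (assuming a glibc system, where getent consults /etc/hosts):

$ getent hosts hadoop.fully.qualified.domain.com
192.168.0.201   hadoop.fully.qualified.domain.com localhost

Once the NameNode and DataNode are restarted, hadoop dfsadmin -report should again show nonzero configured capacity after the DataNode re-registers.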
