hadoop NameNode won't start


Problem Description

If you are visiting this link through my previous question, hadoop2.2.0 installation on linux (NameNode not starting), you probably know that I have been trying to run hadoop-2.2.0 in single-node mode for a long time now. If not, visit that question and you'll find out.

Finally, after following the tutorials, I can format the NameNode fine. However, when I start the NameNode, I see the following error in the logs:

2014-05-31 15:44:20,587 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

I have googled for a solution; most answers ask you to check, double-check, and keep checking core-site.xml, mapred-site.xml, and hdfs-site.xml. I have done all of that, and they look absolutely fine to me. Does anyone have any clues as to what might be going wrong?
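One quick sanity check (assuming the hdfs command on your PATH uses the same HADOOP_CONF_DIR as the daemons) is to ask Hadoop which filesystem URI it actually resolves. If the command below prints file:/// instead of hdfs://localhost:9000, the NameNode is not reading core-site.xml at all:

hdfs getconf -confKey fs.default.name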

UPDATE: the files are located in /usr/local/hadoop/etc/hadoop.

core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
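(Side note: on Hadoop 2.x, fs.default.name is deprecated in favor of fs.defaultFS; the old key is still honored through the deprecation mapping, so this alone does not explain the failure. The 2.x spelling would be:)

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>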

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/yarn_data/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/yarn_data/hdfs/datanode</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Solution

Remove file: from the values of the dfs.namenode.name.dir and dfs.datanode.data.dir properties, format the NameNode again, and start the daemons. Also, make sure you have proper ownership and permissions on those directories.
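A minimal sketch of those steps, assuming a tarball install under /usr/local/hadoop and that HDFS runs as a user named hadoop (adjust the user, group, and paths to your setup):

# take ownership of the storage directories named in hdfs-site.xml
sudo chown -R hadoop:hadoop /usr/local/hadoop/yarn_data
chmod -R 755 /usr/local/hadoop/yarn_data

# re-format the NameNode, then start the HDFS daemons
hdfs namenode -format
/usr/local/hadoop/sbin/start-dfs.sh

# jps should now list NameNode and DataNode processes
jps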

If you really want to use the file: scheme, then use file://, so that the values look like this:

file:///usr/local/hadoop/yarn_data/hdfs/namenode
file:///usr/local/hadoop/yarn_data/hdfs/datanode
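With that change, the two properties in hdfs-site.xml would read:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/yarn_data/hdfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/yarn_data/hdfs/datanode</value>
</property>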

HTH
