hadoop datanode startup fail - Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured


Question


I am trying to set up a Hadoop cluster with one namenode and two datanodes (slave1 and slave2), so I downloaded the zip file from Apache Hadoop and unzipped it on the namenode and on one of the datanodes (slave1).

So I did all the configuration (including formatting the namenode) on master/slave1 and successfully set up slave1 with the master, which means that I am able to submit a job and see the datanode instance in the admin UI.

So I zipped the whole Hadoop installation on slave1, unzipped it on slave2, and changed some property values for the tmp directory and environment variables such as JAVA_HOME. I didn't touch the master URL (fs.defaultFS) in core-site.xml.
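
For reference, the relevant part of core-site.xml is a single property; a minimal sketch, assuming a namenode host named master on port 9000 (both the hostname and port are assumptions, not taken from the question):

  <configuration>
    <property>
      <!-- URI the datanodes use to locate the namenode -->
      <name>fs.defaultFS</name>
      <value>hdfs://master:9000</value>
    </property>
  </configuration>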

When I try to start the datanode on slave2, I get this error:

java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured
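
A datanode resolves the namenode's RPC endpoint from dfs.namenode.servicerpc-address, then dfs.namenode.rpc-address, and finally falls back to fs.defaultFS, so this error means none of the three could be read from the loaded configuration. One workaround is to set the RPC address explicitly in hdfs-site.xml; a minimal sketch, reusing the assumed master:9000 endpoint from above:

  <property>
    <!-- explicit namenode RPC endpoint, so the fs.defaultFS fallback is not needed -->
    <name>dfs.namenode.rpc-address</name>
    <value>master:9000</value>
  </property>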

The weird thing is that I didn't specify these properties on slave1 either, and the datanode starts there without any problem, yet slave2 throws this error even though all the configurations are the same.

I found these links related to this problem, but their suggestions didn't work in my environment.

  1. javaioioexception-incorrect
  2. dfs-namenode-servicerpc-address-or-dfs-namenode-rpc-address-is-not-configured
  3. incorrect-configuration-namenode-address-dfs-namenode-rpc-address-is-not-config

I am using Hadoop 2.4.1 and JDK 1.7 on CentOS.

It would be very helpful if someone who has run into this problem and already figured it out could share some information.

Thanks.

Solution

These steps solved the problem for me (a commented version follows the list):

  1. export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  2. echo $HADOOP_CONF_DIR
  3. hdfs namenode -format
  4. hdfs getconf -namenodes
  5. ./start-dfs.sh
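
Put together as a shell session (a sketch; it assumes HADOOP_HOME points at the unpacked installation and that start-dfs.sh lives under $HADOOP_HOME/sbin):

  # Point Hadoop at the directory that holds core-site.xml and hdfs-site.xml
  export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  echo $HADOOP_CONF_DIR            # verify the variable resolves as expected

  # Re-initialize the namenode metadata (warning: wipes existing HDFS metadata)
  hdfs namenode -format

  # Confirm the namenode address is now picked up from the configuration
  hdfs getconf -namenodes

  # Start the namenode and datanodes
  $HADOOP_HOME/sbin/start-dfs.sh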

After that, Hadoop starts properly.
