Hadoop issue in installation and configuration


Question





After installing Hadoop, when I try to run start-dfs.sh it shows the following error message.

I have searched a lot and found that the WARN appears because I am using a 64-bit Ubuntu OS while Hadoop is compiled against 32-bit, so that is not an issue to work on.

But the "Incorrect configuration" message is something I am worried about, and I am also not able to start the primary and secondary namenodes.

sameer@sameer-Compaq-610:~$ start-dfs.sh
15/07/27 07:47:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: ssh: connect to host localhost port 22: Connection refused
localhost: ssh: connect to host localhost port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
0.0.0.0: ssh: connect to host 0.0.0.0 port 22: Connection refused
15/07/27 07:47:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

My current configuration:

hdfs-site.xml

<configuration>

 <property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/home/sameer/mydata/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/sameer/mydata/hdfs/datanode</value>
 </property>

</configuration>

core-site.xml
<configuration>
   <property>
      <name>fs.default.name </name>
      <value> hdfs://localhost:9000 </value> 
   </property>
</configuration>

yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value> 
   </property>
   <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
</configuration>

mapred-site.xml

<configuration>
   <property> 
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>

Please help me find what I am doing wrong in the configuration or elsewhere.

Thanks, Sam

Solution

One problem is stray whitespace in your core-site.xml configuration. Remove the trailing space in the fs.default.name property name and the leading and trailing spaces in its value:

<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value> 
   </property>
</configuration>
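As a quick check, you can ask Hadoop which value it actually parsed from the configuration files; with the stray spaces in place the property name would not match. (This assumes the hdfs command from your Hadoop installation is on your PATH.)

hdfs getconf -confKey fs.default.name

Note also that in Hadoop 2.x fs.default.name is deprecated in favor of fs.defaultFS, though both still work.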

Another problem is that the ssh service is not running on your machine. Hadoop itself does not strictly require it, but the start-all.sh, stop-all.sh, start-dfs.sh, etc. scripts use ssh to launch the daemons, so it is better to install it. You can install and start the OpenSSH server with the following commands.

sudo apt-get install openssh-server
sudo /etc/init.d/ssh restart
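
The start scripts also log in to localhost over ssh, so a single-node setup usually configures passwordless key-based login as well (a typical setup; adjust the key path if yours differs):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

After this, ssh localhost should log you in without a password prompt.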

Then check whether the service is running. If it is running, the following command returns some output.

netstat -tulpn | grep 22

An alternative way to start Hadoop without ssh is to use hadoop-daemon.sh start [daemon-name].
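
For example, on a pseudo-distributed Hadoop 2.x install the daemons can be started one by one in a local shell (a sketch; yarn-daemon.sh handles the YARN daemons the same way):

hadoop-daemon.sh start namenode
hadoop-daemon.sh start secondarynamenode
hadoop-daemon.sh start datanode
yarn-daemon.sh start resourcemanager
yarn-daemon.sh start nodemanager

Each script writes a log under $HADOOP_HOME/logs, which is the first place to look if a daemon fails to come up.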
