Hadoop: Connecting to ResourceManager failed


Problem description


After installing Hadoop 2.2 and trying to launch the pipes example, I got the following error (the same error shows up after trying to run hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount someFile.txt /out):

/usr/local/hadoop$ hadoop pipes -Dhadoop.pipes.java.recordreader=true -Dhadoop.pipes.java.recordwriter=true -input someFile.txt -output /out -program bin/wordcount
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:08 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:09 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:10 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:11 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
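A log like this usually means nothing is listening on the ResourceManager's client port at all. Before digging into the config files, a rough check can confirm that (command names assume a stock Hadoop 2.2 tarball install with the JDK's jps on the PATH):

```shell
# If the ResourceManager is not running, (re)start the YARN daemons
# from $HADOOP_HOME first:
#   sbin/stop-yarn.sh && sbin/start-yarn.sh

# Is the ResourceManager JVM running at all?
jps | grep ResourceManager || echo "ResourceManager not running"

# Is anything listening on the default RM client port (8032)?
netstat -tln | grep 8032 || echo "nothing listening on 8032"
```

If the process is up but the port check fails, the RM is likely bound to an address other than the one the client is trying, which points at yarn-site.xml.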

My yarn-site.xml:

<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- Site specific YARN configuration properties -->
</configuration>

core-site.xml:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

mapred-site.xml:

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
</configuration>

hdfs-site.xml:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>

I've verified that IPv6 is disabled, as it should be. Maybe my /etc/hosts is not correct?

/etc/hosts:

fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.0.1 localhost.localdomain localhost hduser
# Auto-generated hostname. Please do not remove this comment.
79.98.30.76 356114.s.dedikuoti.lt  356114
::1             localhost ip6-localhost ip6-loopback

Solution

The problem connecting to the ResourceManager was that I needed to add a few properties to yarn-site.xml:

<property>
<name>yarn.resourcemanager.address</name>
<value>127.0.0.1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>127.0.0.1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>127.0.0.1:8031</value>
</property>
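For reference, merging these entries with the original aux-services settings gives a full yarn-site.xml along these lines (the 127.0.0.1 addresses match the single-node setup above; on a multi-node cluster they would instead be the master's hostname):

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>127.0.0.1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>127.0.0.1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>127.0.0.1:8031</value>
  </property>
</configuration>
```

After editing, restart the YARN daemons (sbin/stop-yarn.sh then sbin/start-yarn.sh) so the new addresses take effect.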

My jobs still aren't running yet, but the connection is now successful.
