ConnectException: Connection refused when running MapReduce in Hadoop


Question

I set up Hadoop (2.6.0) in multi-machine mode: 1 namenode + 3 datanodes. When I ran start-all.sh, all the daemons (namenode, datanode, resource manager, node manager) started fine. I checked with the jps command, and the result on each node is below:

NameNode:

7300 ResourceManager
6942 NameNode
7154 SecondaryNameNode

DataNode:

3840 DataNode
3924 NodeManager

I also uploaded a sample text file to HDFS at /user/hadoop/data/sample.txt. Absolutely no error at that point.

But when I tried to run a MapReduce job with the Hadoop examples jar:


hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hadoop/data/sample.txt /user/hadoop/output

I got this error:

15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 running    in uber mode : false
15/04/08 03:31:26 INFO mapreduce.Job:  map 0% reduce 0%
15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 failed with     state FAILED due to: Application application_1428478232474_0001 failed 2 times due to Error launching appattempt_1428478232474_0001_000002. Got exception: java.net.ConnectException: Call From hadoop/127.0.0.1 to localhost:53245 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy31.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 9 more
Failing the application.
15/04/08 03:31:26 INFO mapreduce.Job: Counters: 0
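
The failure is the ResourceManager calling localhost:53245 to launch the application master, which suggests a NodeManager registered itself under a loopback address. A quick way to check what the ResourceManager sees (a sketch using the stock yarn CLI):

# Lists the NodeManagers registered with the ResourceManager.
# A node reported as localhost:<port> points at a hostname/hosts misconfiguration.
yarn node -list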

About the configuration: the namenode can definitely ssh to the datanodes and vice versa without a password prompt. I also disabled IPv6 and modified the /etc/hosts file:


127.0.0.1 localhost hadoop
192.168.56.102 hadoop-nn
192.168.56.103 hadoop-dn1
192.168.56.104 hadoop-dn2
192.168.56.105 hadoop-dn3
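
To verify what each name actually resolves to on a given machine (a quick sketch; getent is assumed available on Linux):

# Prints the address the resolver returns for the name "hadoop";
# with the hosts file above it is 127.0.0.1 (loopback).
getent hosts hadoop
getent hosts hadoop-nn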

I don't know why MapReduce can't run even though the namenode and datanodes work fine. I'm stuck here; can you help me find the reason?


Edit: Here is the config in hdfs-site.xml (namenode):

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>
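
As a sanity check, the hdfs CLI can echo back the values the daemons actually pick up from this file (a sketch using the stock CLI):

# Prints the effective value of a configuration key.
hdfs getconf -confKey dfs.namenode.http-address
hdfs getconf -confKey dfs.namenode.secondary.http-address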

In the datanodes:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/data/datanode</value>
    <description>DataNode directory</description>
</property>

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>

Result of hadoop fs -ls /user/hadoop/data:

hadoop@hadoop:~/DATA$ hadoop fs -ls /user/hadoop/data
15/04/09 00:23:27 Found 2 items
-rw-r--r--   3 hadoop supergroup   29 2015-04-09 00:22 /user/hadoop/data/sample.txt
-rw-r--r--   3 hadoop supergroup   27 2015-04-09 00:22 /user/hadoop/data/sample1.txt

And hadoop fs -ls /user/hadoop/output:

ls: `/user/hadoop/output': No such file or directory


Answer

Found the solution!! See this post: yarn shows data nodes id/name as localhost

Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:56148 failed on connection exception: java.net.ConnectException: Connection refused;

Both master and slaves had the hostname localhost.localdomain in /etc/hostname.
I changed the slaves' hostnames to slave1 and slave2. That worked. Thank you everyone for your time.

@kate make sure /etc/hostname on the namenode and datanodes is not set to localhost. Just type hostname in a terminal to see it. You can set a new hostname with the same command.
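
For example, to rename a slave (a sketch; hostnamectl is assumed present on systemd distros, otherwise edit /etc/hostname and reboot):

# Show the current hostname.
hostname
# Change it for the running session only.
sudo hostname slave1
# Make the change permanent on a systemd-based system.
sudo hostnamectl set-hostname slave1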

My master's and workers' (slaves') /etc/hosts looks like this:

127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
#127.0.1.1    localhost
192.168.111.72  master
192.168.111.65  worker1
192.168.111.66  worker2

Hostname of worker1:

hduser@worker1:/mnt/hdfs/datanode$ cat /etc/hostname 
worker1



And worker2:

hduser@worker2:/usr/local/hadoop/logs$ cat /etc/hostname 
worker2

Also, you probably don't want the "hadoop" hostname mapped to the loopback interface, i.e.:

127.0.0.1 localhost hadoop 
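
A mapping like this instead binds "hadoop" to the machine's real interface (a sketch reusing the addresses from the question):

127.0.0.1       localhost
192.168.56.102  hadoop hadoop-nn    # real IP of the namenode, not loopback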

Check point (1) in https://wiki.apache.org/hadoop/ConnectionRefused

Thanks.
