hadoop fs -mkdir failed on connection exception


Question



I have been trying to set up and run Hadoop in pseudo-distributed mode, but when I type

bin/hadoop fs -mkdir input

I get

mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

Here are the details:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/tmp</value>
  </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h1:9000</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>h1:9001</value>
    </property>

  <property>
    <name>mapred.map.tasks</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>h1:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>h1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>h1:19888</value>
  </property>

</configuration>

hdfs-site.xml

<configuration>

  <property>
    <name>dfs.http.address</name>
    <value>h1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>h1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>h1:50090</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/grid/data</value>
  </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 h1
192.168.1.14 h2
192.168.1.15 h3

After running hadoop namenode -format and start-all.sh, jps shows:

1702 ResourceManager
1374 DataNode
1802 NodeManager
2331 Jps
1276 NameNode
1558 SecondaryNameNode

the problem occurs:

[grid@h1 hadoop-2.6.0]$ bin/hadoop fs -mkdir input
15/05/13 16:37:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Where is the problem?

hadoop-grid-datanode-h1.log

2015-05-12 11:26:20,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = h1/192.168.1.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

hadoop-grid-namenode-h1.log

2015-05-08 16:06:32,561 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = h1/192.168.1.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

Why does port 9000 not work?

[grid@h1 ~]$ netstat -tnl |grep 9000
[grid@h1 ~]$ netstat -tnl |grep 9001
tcp        0      0 192.168.1.13:9001           0.0.0.0:*                   LISTEN     
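Notice what netstat shows: nothing is listening on 9000, but something is on 9001. That lines up with the configs above, where core-site.xml points clients at hdfs://h1:9000 while hdfs-site.xml sets dfs.namenode.rpc-address to h1:9001. A throwaway sketch to cross-check the two values (the XML fragments are copied from the question; the sed pattern is just an illustration, not a robust XML parser):

```shell
# Cross-check the two NameNode ports from the question's configs.
# The one-line XML fragments below are copied from core-site.xml and
# hdfs-site.xml as posted above.
core_site='<property><name>fs.defaultFS</name><value>hdfs://h1:9000</value></property>'
hdfs_site='<property><name>dfs.namenode.rpc-address</name><value>h1:9001</value></property>'

port_of() {  # pull the port number out of a <value>host:port</value> pair
  printf '%s\n' "$1" | sed -n 's/.*:\([0-9][0-9]*\)<\/value>.*/\1/p'
}

fs_port=$(port_of "$core_site")    # port clients dial (fs.defaultFS)
rpc_port=$(port_of "$hdfs_site")   # port the NameNode binds (rpc-address)

echo "client dials: $fs_port, NameNode listens on: $rpc_port"
[ "$fs_port" = "$rpc_port" ] || echo "mismatch: connections to $fs_port may be refused"
```

With the values from this question, the script reports 9000 versus 9001, which is one plausible reason a connection to h1:9000 is refused even while the daemons are running.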

Solution

Please start dfs and yarn.

[hadoop@hadooplab sbin]$ ./start-dfs.sh

[hadoop@hadooplab sbin]$ ./start-yarn.sh

Now try using "bin/hadoop fs -mkdir input"

The issue usually comes up when you install Hadoop in a VM and then shut the VM down. When you shut down the VM, DFS and YARN stop as well, so you need to start DFS and YARN each time you restart the VM.
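Since the daemons die with the VM, a quick sanity check before retrying the mkdir is to see whether anything is accepting connections on the NameNode port at all. A small bash sketch (h1 and 9000 are this question's hostname and port; the /dev/tcp pseudo-device is a bash-ism, not POSIX sh):

```shell
# Return success iff something is accepting TCP connections on host:port.
# Uses bash's /dev/tcp pseudo-device; h1:9000 are the values from this question.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open h1 9000; then
  echo "NameNode RPC is up; bin/hadoop fs -mkdir input should work now"
else
  echo "nothing listening on h1:9000 yet; run start-dfs.sh first"
fi
```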
