Cannot assign requested address


Problem description

cat /etc/hosts

127.0.0.1 localhost.localdomain localhost
#192.168.0.105 UG-BLR-L030.example.com UG-BLR-L030 localhost 

192.168.0.105 UG-BLR-L030 localhost.localdomain localhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-data</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://UG-BLR-L030:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
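
The host and port that the NameNode tries to bind to come straight from the authority part of the fs.default.name URI above; the hostname is then resolved through /etc/hosts (or DNS) before the bind. As a quick illustration of how that authority splits (plain java.net.URI here, not the actual Hadoop code path):

import java.net.URI;

public class FsUriCheck {
    public static void main(String[] args) {
        // Same value as fs.default.name in core-site.xml above
        URI uri = URI.create("hdfs://UG-BLR-L030:54310");
        System.out.println("host = " + uri.getHost()); // UG-BLR-L030
        System.out.println("port = " + uri.getPort()); // 54310
    }
}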

Whenever I try to start Hadoop with the command start-dfs.sh, I get the following error:

2015-05-03 15:59:45,189 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:745)
2015-05-03 15:59:45,195 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to UG-BLR-L030/192.168.0.105:54310 : Cannot assign requested address
    at org.apache.hadoop.ipc.Server.bind(Server.java:227)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:463)
    at sun.nio.ch.Net.bind(Net.java:455)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    ... 8 more

2015-05-03 15:59:45,196 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at UG-BLR-L030/192.168.0.105
************************************************************/

ifconfig

eth0      Link encap:Ethernet  HWaddr f0:1f:af:4a:6b:fa  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:340842 errors:0 dropped:0 overruns:0 frame:0
          TX packets:197054 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:410705701 (410.7 MB)  TX bytes:18456910 (18.4 MB)
          Interrupt:20 Memory:f7e00000-f7e20000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1085723 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1085723 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:136152053 (136.1 MB)  TX bytes:136152053 (136.1 MB)

wlan0     Link encap:Ethernet  HWaddr 0c:8b:fd:1d:14:ba  
          inet addr:192.168.0.105  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:873934 errors:0 dropped:0 overruns:0 frame:0
          TX packets:630943 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:919721448 (919.7 MB)  TX bytes:92919940 (92.9 MB)

Error:

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to UG-BLR-L030/192.168.0.105:54310 : Cannot assign requested address

Why does Hadoop try to bind to UG-BLR-L030/192.168.0.105:54310 instead of UG-BLR-L030:54310 or 192.168.0.105:54310?
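
As an aside, the UG-BLR-L030/192.168.0.105:54310 form is not a different address: it is simply how Java prints an InetAddress (hostname/resolved-IP). The NameNode resolves the host taken from fs.default.name and then binds to whatever it resolves to, and bind() typically fails with this exception when the resolved address is not assigned to any local interface at that moment. A minimal standalone sketch of that behaviour (plain Java, not Hadoop's actual code), using the hostname and port from the post:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindCheck {
    public static void main(String[] args) throws Exception {
        // Resolve the hostname the same way the NameNode does (via /etc/hosts or DNS)
        InetAddress addr = InetAddress.getByName("UG-BLR-L030");
        System.out.println(addr); // prints something like UG-BLR-L030/192.168.0.105

        // bind() throws java.net.BindException: Cannot assign requested address
        // if the resolved IP is not assigned to any local interface right now
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress(addr, 54310));
        System.out.println("Bound to " + server.getLocalSocketAddress());
        server.close();
    }
}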

Solution

I managed to get this to work by editing my hosts file to look like this:

127.0.0.1 UG-BLR-L030.example.com UG-BLR-L030 localhost
192.168.0.105 UG-BLR-L030.example.com UG-BLR-L030 
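
Presumably this works because the first /etc/hosts entry now maps UG-BLR-L030 to 127.0.0.1, which is always assigned to the loopback interface, so the bind can succeed even if wlan0 is down or has changed address. One quick way to check what the hostname resolves to and whether each resolved address is locally assigned (plain Java, not Hadoop code; the hostname is the one from the post):

import java.net.InetAddress;
import java.net.NetworkInterface;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Every address /etc/hosts (or DNS) returns for the NameNode hostname
        for (InetAddress a : InetAddress.getAllByName("UG-BLR-L030")) {
            // A bind to this address can only succeed if some local interface owns it
            boolean local = NetworkInterface.getByInetAddress(a) != null;
            System.out.println(a + "  locally assigned: " + local);
        }
    }
}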
