NameNode: java.net.BindException


Problem description

Hi folks, I am stuck on a very strange problem. I am installing HBase and Hadoop on another VM, accessing it from my machine. I installed Hadoop properly, ran ./start-all.sh, and saw that all processes were running perfectly. Running jps showed:

JobTracker
TaskTracker
NameNode
SecondaryNameNode
DataNode

Everything was running fine. But when I set up HBase and then started Hadoop and HBase, I saw that the NameNode was not running, and in the NameNode log file I found this exception:

java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:662)
2012-05-19 08:46:07,493 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2012-05-19 08:46:07,516 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to localhost/23.21.195.24:54310 : Cannot assign requested address
    at org.apache.hadoop.ipc.Server.bind(Server.java:227)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    ... 8 more

2012-05-19 08:46:07,516 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 



I checked the ports and revised all the conf files again and again, but didn't find a solution. Please guide me if anyone has an idea.
Thanks a lot
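The failure mode in the log can be reproduced in isolation: asking the JVM to bind a server socket to an IP address that no local network interface owns raises the same java.net.BindException: Cannot assign requested address. A minimal sketch, assuming 203.0.113.1 (a TEST-NET documentation address) is not configured on the local machine, just as 23.21.195.24 was not configured on the VM in the log:

```java
import java.net.BindException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws Exception {
        // 203.0.113.1 (TEST-NET-3, RFC 5737) is assumed not to be assigned
        // to any interface on this machine.
        InetAddress addr = InetAddress.getByName("203.0.113.1");
        try (ServerSocket ss = new ServerSocket()) {
            // Same bind step the NameNode performs for its RPC server.
            ss.bind(new InetSocketAddress(addr, 54310));
            System.out.println("bound (unexpected)");
        } catch (BindException e) {
            // Same exception the NameNode logged at startup.
            System.out.println("BindException: " + e.getMessage());
        }
    }
}
```

If this prints the BindException message, the address simply is not local to the machine; the fix belongs in the OS name resolution (the hosts file) or the Hadoop address configuration, not in the port numbers.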

Solution

Based on your comment, your problem is most probably related to the hosts file.

Firstly, you should uncomment the 127.0.0.1 localhost entry; this is a fundamental entry.

Secondly, have you set up Hadoop and HBase to run with externally accessible services? I'm not too up on HBase, but for Hadoop the services need to be bound to non-localhost addresses to allow external access, so the masters and slaves files in $HADOOP_HOME/conf need to name the actual machine names (or IP addresses, if you don't have a DNS server). None of your configuration files should mention localhost; use either host names or IP addresses instead.
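The log line localhost/23.21.195.24:54310 shows that on the asker's VM the name localhost was resolving to a public IP rather than to 127.0.0.1, which is exactly what a broken hosts file produces. A quick standalone check of what localhost resolves to, no Hadoop required:

```java
import java.net.InetAddress;

public class HostsCheck {
    public static void main(String[] args) throws Exception {
        // On a correctly configured machine, "localhost" must resolve to a
        // loopback address (127.0.0.1 or ::1).
        InetAddress addr = InetAddress.getByName("localhost");
        System.out.println("localhost -> " + addr.getHostAddress());
        System.out.println("loopback: " + addr.isLoopbackAddress());
    }
}
```

If loopback prints false, restore the 127.0.0.1 localhost line in /etc/hosts before touching any Hadoop configuration.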
