hadoop namenode port in use

Problem description

This is actually a standby HA namenode. It was configured with the same settings as the primary, and hdfs namenode -bootstrapStandby ran successfully. It starts coming up on the standard HTTP port 50070, as defined in the config file:

<property>
  <name>dfs.namenode.http-address.ha-hadoop.namenode2</name>
  <value>namenode2:50070</value>
</property>
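
For context, the usual standby bring-up looks roughly like this; a minimal sketch assuming a standard Hadoop 2.x install with the stock scripts on the PATH:

# On the standby host, after copying the same config as the primary:
# pull the current metadata from the active namenode, then start the daemon.
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode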

The startup begins OK, then hits:

15/02/02 08:06:17 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:50070
15/02/02 08:06:17 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
15/02/02 08:06:17 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
15/02/02 08:06:17 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
15/02/02 08:06:17 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
15/02/02 08:06:17 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
15/02/02 08:06:17 INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: hadoop1:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
        ... 8 more
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
15/02/02 08:06:17 FATAL namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: hadoop1:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
        ... 8 more
15/02/02 08:06:17 INFO util.ExitUtil: Exiting with status 1
15/02/02 08:06:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1.marketstudies.com/192.168.1.125
************************************************************/

I have tried changing the http-address port by setting:

<property>
  <name>dfs.namenode.http-address.local1-hadoop.hadoop1</name>
  <value>hadoop1:10070</value>
</property>

But then I get the same failure as above, just with the new port, which suggests it is not a conflict on any particular port:

15/02/02 08:16:51 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070

The same configuration works on the primary namenode.

This question seems similar to my issue, but the answer didn't help. I tried setting dfs.http.address to other values and it didn't change anything. I believe that is a non-HA config option, replaced in HA setups by dfs.namenode.http-address.<ha-name>.<namenode-name>.
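
For reference, the HA pattern is dfs.namenode.http-address.<nameservice>.<namenode-id>, declared alongside the nameservice and its namenode IDs. A minimal sketch using the IDs from the snippet above; the sibling ID hadoop2 is an assumption for illustration:

<property>
  <name>dfs.nameservices</name>
  <value>local1-hadoop</value>
</property>
<property>
  <!-- hadoop2 as the second namenode ID is assumed for illustration -->
  <name>dfs.ha.namenodes.local1-hadoop</name>
  <value>hadoop1,hadoop2</value>
</property>
<property>
  <name>dfs.namenode.http-address.local1-hadoop.hadoop1</name>
  <value>hadoop1:10070</value>
</property>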

Nothing is actually listening on the HTTP port, as the following shows:

# netstat -anp |grep LIST
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      946/sshd
tcp        0      0 0.0.0.0:46712           0.0.0.0:*               LISTEN      2066/java
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:8480            0.0.0.0:*               LISTEN      1471/java
tcp        0      0 0.0.0.0:10050           0.0.0.0:*               LISTEN      2358/zabbix_agentd
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      1471/java
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      2066/java
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      2066/java
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1020/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      946/sshd

I tried starting as the root user to see whether it was some kind of permissions problem with listening on the port, but that gives the same error.

Answer

Found the issue. This server has a short history in which its IP address changed, but the new address was appended to /etc/hosts rather than replacing the old entry. That confused the Hadoop startup: it tried to open port 50070 on an address that exists in /etc/hosts but not on any local interface. The inner Caused by: java.net.BindException: Cannot assign requested address is the real clue, since that error means the bind address resolved to an IP no local interface holds; the outer "Port in use" message just makes it confusing.
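
A sketch of the kind of /etc/hosts state that can trigger this; the question doesn't give the old address, so 192.168.1.50 is invented for illustration:

# Broken: the old entry was never removed when the IP changed, so lookups
# of "hadoop1" can return an address that no local interface holds.
192.168.1.50    hadoop1.marketstudies.com hadoop1   # stale (hypothetical old IP)
192.168.1.125   hadoop1.marketstudies.com hadoop1   # current

# Fixed: a single entry with the current address.
192.168.1.125   hadoop1.marketstudies.com hadoop1

After cleaning the file, getent hosts hadoop1 should print exactly one line with the current address, and that address should appear in the output of ip addr before the namenode is restarted.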
