hadoop connection refused on port 9000


Problem description

I want to set up a Hadoop cluster in pseudo-distributed mode for development. Trying to start the cluster fails due to a refused connection on port 9000.

These are my configs (pretty standard):

core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>~/hacking/hd-data/tmp</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>~/hacking/hd-data/snn</value>
  </property>
</configuration>

hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>~/hacking/hd-data/nn</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>~/hacking/hd-data/dn</value>
  </property>
  <property>
    <name>dfs.permissions.supergroup</name>
    <value>hadoop</value>
  </property>
</configuration>

hadoop-env.sh - here I changed the config to IPv4-only mode (see the last line):

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10

# Disabling IPv6 for HADOOP
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

/etc/hosts:

127.0.0.1   localhost   zaphod

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

But after calling ./start-dfs.sh, the following lines show up in the log files:

hadoop-pschmidt-datanode-zaphod.log

2013-08-19 21:21:59,430 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = zaphod/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.204.0
STARTUP_MSG:   build = git://hrt8n35.cc1.ygridcore.net/ on branch branch-0.20-security-204 -r 65e258bf0813ac2b15bb4c954660eaf9e8fba141; compiled by 'hortonow' on Thu Aug 25 23:25:52 UTC 2011
************************************************************/
2013-08-19 21:22:03,950 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-19 21:22:04,052 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-19 21:22:04,064 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-19 21:22:04,065 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-08-19 21:22:07,054 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-19 21:22:07,060 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-08-19 21:22:08,709 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
2013-08-19 21:22:09,710 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
2013-08-19 21:22:10,711 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
2013-08-19 21:22:11,712 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
2013-08-19 21:22:12,712 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
2013-08-19 21:22:13,713 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
2013-08-19 21:22:14,714 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
2013-08-19 21:22:15,714 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
2013-08-19 21:22:16,715 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
2013-08-19 21:22:17,716 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
2013-08-19 21:22:17,717 INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:9000 not available yet, Zzzzz...

hadoop-pschmidt-namenode-zaphod.log

2013-08-19 21:21:59,443 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = zaphod/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.204.0
STARTUP_MSG:   build = git://hrt8n35.cc1.ygridcore.net/ on branch branch-0.20-security-204 -r 65e258bf0813ac2b15bb4c954660eaf9e8fba141; compiled by 'hortonow' on Thu Aug 25 23:25:52 UTC 2011
************************************************************/
2013-08-19 21:22:03,950 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-19 21:22:04,052 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-19 21:22:04,064 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-19 21:22:04,064 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-08-19 21:22:06,050 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-19 21:22:06,056 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-08-19 21:22:06,095 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-08-19 21:22:06,097 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-08-19 21:22:06,232 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2013-08-19 21:22:06,234 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-08-19 21:22:06,235 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2013-08-19 21:22:06,235 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-08-19 21:22:06,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=pschmidt
2013-08-19 21:22:06,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=hadoop
2013-08-19 21:22:06,748 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-08-19 21:22:06,754 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-08-19 21:22:06,768 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-08-19 21:22:07,262 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-08-19 21:22:07,322 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-08-19 21:22:07,326 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/pschmidt/hacking/hadoop-0.20.204.0/~/hacking/hd-data/nn does not exist.
2013-08-19 21:22:07,329 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/pschmidt/hacking/hadoop-0.20.204.0/~/hacking/hd-data/nn is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2013-08-19 21:22:07,331 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/pschmidt/hacking/hadoop-0.20.204.0/~/hacking/hd-data/nn is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)

2013-08-19 21:22:07,332 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zaphod/127.0.1.1
************************************************************/

After reformatting HDFS, the following output is displayed:

13/08/19 21:50:21 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = zaphod/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.204.0
STARTUP_MSG:   build = git://hrt8n35.cc1.ygridcore.net/ on branch branch-0.20-security-204 -r 65e258bf0813ac2b15bb4c954660eaf9e8fba141; compiled by 'hortonow' on Thu Aug 25 23:25:52 UTC 2011
************************************************************/
Re-format filesystem in ~/hacking/hd-data/nn ? (Y or N) Y
13/08/19 21:50:26 INFO util.GSet: VM type       = 64-bit
13/08/19 21:50:26 INFO util.GSet: 2% max memory = 17.77875 MB
13/08/19 21:50:26 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/08/19 21:50:26 INFO util.GSet: recommended=2097152, actual=2097152
13/08/19 21:50:27 INFO namenode.FSNamesystem: fsOwner=pschmidt
13/08/19 21:50:27 INFO namenode.FSNamesystem: supergroup=hadoop
13/08/19 21:50:27 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/19 21:50:27 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/08/19 21:50:27 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/08/19 21:50:27 INFO namenode.NameNode: Caching file names occuring more than 10 times 
13/08/19 21:50:27 INFO common.Storage: Image file of size 110 saved in 0 seconds.
13/08/19 21:50:28 INFO common.Storage: Storage directory ~/hacking/hd-data/nn has been successfully formatted.
13/08/19 21:50:28 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zaphod/127.0.0.1
************************************************************/

Using netstat -lpten | grep java:

tcp        0      0 0.0.0.0:50301           0.0.0.0:*               LISTEN      1000       50995       9875/java       
tcp        0      0 0.0.0.0:35471           0.0.0.0:*               LISTEN      1000       51775       9639/java       
tcp6       0      0 :::2181                 :::*                    LISTEN      1000       20841       2659/java       
tcp6       0      0 :::36743                :::*                    LISTEN      1000       20524       2659/java 

Using netstat -lpten | grep 9000 returns nothing, so presumably no application is bound to this designated port after all.

What else can I check to get my HDFS up and running? Don't hesitate to ask for further logs and config files.

Thanks in advance.

Solution

Use an absolute path for this and make sure the hadoop user has permission to access the directory. Hadoop does not expand ~ (only the shell does), which is why the NameNode log shows the literal path /home/pschmidt/hacking/hadoop-0.20.204.0/~/hacking/hd-data/nn:

<property>
  <name>dfs.data.dir</name>
  <value>~/hacking/hd-data/dn</value>
</property>
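
For example, a corrected hdfs-site.xml could spell the home directory out; this is a sketch, and the /home/pschmidt prefix is only an assumption taken from the log output, so adjust it to your setup. The same applies to hadoop.tmp.dir and fs.checkpoint.dir in core-site.xml:

<property>
  <name>dfs.name.dir</name>
  <!-- absolute path instead of ~/hacking/hd-data/nn -->
  <value>/home/pschmidt/hacking/hd-data/nn</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <!-- absolute path instead of ~/hacking/hd-data/dn -->
  <value>/home/pschmidt/hacking/hd-data/dn</value>
</property>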

Also make sure you reformat the NameNode after fixing the path:

# hadoop namenode -format
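
Putting it together, a possible sequence is sketched below (assuming the /home/pschmidt paths above; run the hadoop commands as the user that owns the cluster):

# create the storage directories up front so the daemons can access them
mkdir -p /home/pschmidt/hacking/hd-data/tmp
mkdir -p /home/pschmidt/hacking/hd-data/nn
mkdir -p /home/pschmidt/hacking/hd-data/dn
mkdir -p /home/pschmidt/hacking/hd-data/snn

# reformat the namenode so it initializes the new absolute path
hadoop namenode -format

# restart HDFS and verify that the namenode now listens on port 9000
./start-dfs.sh
netstat -lpten | grep 9000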
