No namenode error in pseudo-mode


Problem Description

I'm new to Hadoop and in the learning phase. Following Hadoop: The Definitive Guide, I set up Hadoop in pseudo-distributed mode, and everything was working fine. I was even able to run all of the examples from chapter 3 yesterday. Today, when I rebooted my Unix machine and ran start-dfs.sh, then tried localhost:50070, it showed an error, and when I try to stop dfs (stop-dfs.sh) it says there is no namenode to stop. I have been googling the issue with no result. Also, when I format my namenode again, everything starts working fine: I can connect to localhost:50070 and even replicate files and directories in HDFS. But as soon as I restart my Linux machine and try to connect to HDFS, the same problem comes up.
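For reference, the cycle I keep going through looks roughly like this (a sketch; it assumes the commands are run from the Hadoop install directory):

    # after a reboot: start HDFS, then the web UI at http://localhost:50070 shows an error
    bin/start-dfs.sh

    # trying to shut it down reports that nothing is running
    bin/stop-dfs.sh            # prints "no namenode to stop"

    # re-formatting the namenode makes everything work again -- until the next reboot
    bin/hadoop namenode -format
    bin/start-dfs.sh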

Below is the error log:

************************************************************/
2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist.
2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)

2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/

Any help is appreciated. Thank you.

Solution

Here is the kicker:

    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

I'd been having similar issues. I used stop-all.sh to shut down Hadoop. I guess it was foolish of me to think this would properly save the data in my HDFS.

But as far as I can tell from what appears to be the relevant code chunk in the hadoop-daemon.sh script, this is not the case; it just kills the processes:

(stop)    # the 'stop' branch of the case statement in bin/hadoop-daemon.sh

    if [ -f $pid ]; then
      if kill -0 `cat $pid` > /dev/null 2>&1; then
        echo stopping $command
        kill `cat $pid`
      else
        echo no $command to stop
      fi
    else
      echo no $command to stop
    fi

Did you look to see whether the directory it's complaining about exists? I checked, and mine did not, although there was an (empty!) data folder in there where I imagine data might once have lived.
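A quick way to check (the path is taken straight from the exception above):

    # does the storage directory from the exception still exist?
    ls -ld /tmp/hadoop-anshu/dfs/name

    # and what, if anything, survived under the parent?
    ls -lR /tmp/hadoop-anshu/dfs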

So my guess was that what we need to do is configure Hadoop such that the namenode and datanode are NOT stored in a tmp directory. There is some possibility that the OS is doing maintenance and getting rid of these files. Either that, or Hadoop figures you don't care about them anymore, because you wouldn't have left them in a tmp directory if you did, and you wouldn't be restarting your machine in the middle of a map-reduce job. I don't really think this should happen (I mean, that's not how I would design things), but it seemed like a good guess.

So, based on this site http://wiki.datameer.com/display/DAS11/Hadoop+configuration+file+templates I edited my conf/hdfs-site.xml file to point to the following paths (obviously, make your own directories as you see fit):

<property>
  <name>dfs.name.dir</name>
  <value>/hadoopstorage/name/</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/hadoopstorage/data/</value>
</property>

I did this, formatted the new namenode (sadly, data loss seems inevitable in this situation), stopped and started Hadoop with the shell scripts, restarted the machine, and my files were still there...
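In shell form, that sequence is roughly the following (a sketch; the /hadoopstorage paths match the config above, and the commands assume the Hadoop install directory):

    # create the new storage directories and make them writable by the hadoop user
    sudo mkdir -p /hadoopstorage/name /hadoopstorage/data
    sudo chown -R $USER /hadoopstorage

    # re-format the namenode against the new dfs.name.dir (existing HDFS data is lost)
    bin/hadoop namenode -format

    # restart HDFS and confirm the daemons survive a reboot
    bin/stop-dfs.sh
    bin/start-dfs.sh
    jps                        # should list NameNode, DataNode, SecondaryNameNode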

YMMV... hope this works for you! I'm on OS X, but I don't think you should get dissimilar results.

J
