Hadoop - namenode is not starting up
I am trying to run Hadoop as the root user. I executed the namenode format command hadoop namenode -format
while the Hadoop file system was running.
After this, when I try to start the name node server, it shows the error below:
13/05/23 04:11:37 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
I searched for a solution but could not find a clear one.
Can anyone suggest a fix?
Thanks.
Cool, I found the solution.
Stop all running servers:
1) stop-all.sh
Edit the file /usr/local/hadoop/conf/hdfs-site.xml
and add the configuration below if it is missing:
<property>
<name>dfs.data.dir</name>
<value>/app/hadoop/tmp/dfs/name/data</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>/app/hadoop/tmp/dfs/name</value>
<final>true</final>
</property>
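For context, these property elements must sit inside the top-level configuration element of hdfs-site.xml, or Hadoop will ignore them. A sketch of the complete file containing only the two properties from this answer (the /app/hadoop/tmp paths come from the snippet above, not from Hadoop's defaults):

```xml
<?xml version="1.0"?>
<!-- /usr/local/hadoop/conf/hdfs-site.xml (sketch) -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/app/hadoop/tmp/dfs/name/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/app/hadoop/tmp/dfs/name</value>
    <final>true</final>
  </property>
</configuration>
```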
Start both the HDFS and MapReduce daemons:
2) start-dfs.sh
3) start-mapred.sh
Then run the rest of the steps to execute the MapReduce sample given in this link.
Note: run the command bin/start-all.sh
if the individual commands above do not work.
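As background on why the error appears: hadoop namenode -format writes metadata (including a current/VERSION file) under the directory named by dfs.name.dir, and "NameNode is not formatted" generally means that file is absent, for example because dfs.name.dir now points somewhere new. A minimal sketch of that check, using a throwaway /tmp directory as a stand-in for the real name directory:

```shell
# Stand-in for the real dfs.name.dir (e.g. /app/hadoop/tmp/dfs/name).
NAME_DIR=/tmp/demo-dfs-name

# Simulate what a successful `hadoop namenode -format` leaves behind:
# a current/ subdirectory containing a VERSION file.
mkdir -p "$NAME_DIR/current"
printf 'layoutVersion=-32\n' > "$NAME_DIR/current/VERSION"

# The check the namenode effectively performs at startup.
if [ -f "$NAME_DIR/current/VERSION" ]; then
  echo "formatted"
else
  echo "not formatted - run: hadoop namenode -format"
fi
```

On a real cluster, if dfs.name.dir has no current/VERSION file, stop the daemons and run hadoop namenode -format before start-dfs.sh; be aware that reformatting erases all existing HDFS metadata.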