hadoop's datanode is not starting


Problem Description

I am using Ubuntu 14.04 LTS, Java 8, and Hadoop 2.5.1 for the installation. I followed this guide to install all the components (sorry for not using Michael Noll's). The problem I face is that when I run start-dfs.sh, I get the following message:

oroborus@Saras-Dell-System-XPS-L502X:~$ start-dfs.sh
14/11/12 16:12:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-oroborus-namenode-Saras-Dell-System-XPS-L502X.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-oroborus-datanode-Saras-Dell-System-XPS-L502X.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-oroborus-secondarynamenode-Saras-Dell-System-XPS-L502X.out
14/11/12 16:12:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

After running start-yarn.sh (which seems to work fine) and then jps, I get the following output:

oroborus@Saras-Dell-System-XPS-L502X:~$ jps
9090 NodeManager
5107 JobHistoryServer
8952 ResourceManager
12442 Jps
11981 NameNode

The ideal output should have a DataNode in it, but it is not there. After some Googling and searching SO, I found that the error is logged, so here are the DataNode logs (only the error part; if you need more, let me know):

2014-11-08 23:30:32,709 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-11-08 23:30:33,132 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /usr/local/hadoop_store/hdfs/datanode :
EPERM: Operation not permitted
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:472)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1866)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1908)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1890)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1782)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1829)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2005)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2029)
2014-11-08 23:30:33,134 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1917)
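
For reference, the EPERM above comes from the DataNode trying to chmod its storage directory (the setPermission call in the stack trace); the current ownership and mode of that path, taken from the log, can be inspected directly:

# Inspect ownership and permissions of the storage dir from the error above
ls -ld /usr/local/hadoop_store/hdfs/datanode
stat -c '%U:%G %a %n' /usr/local/hadoop_store/hdfs/datanode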

Now my question is how to make it valid.

Any help is appreciated.
P.S. I tried a lot of forums and SO posts; none of them could give me a solution to this problem. Hence the question.

Solution

First, delete all contents from the HDFS folder, i.e. the directory given by the hadoop.tmp.dir property in core-site.xml:

rm -rf /usr/local/hadoop_store
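
Two optional sanity checks around this step; a sketch, assuming the standard Hadoop 2.x layout and that your hdfs-site.xml uses the paired namenode/datanode directories common to this kind of guide:

# Confirm which directory hadoop.tmp.dir actually points to
hdfs getconf -confKey hadoop.tmp.dir

# The rm -rf above removed the folder itself, so recreate the expected tree;
# the namenode sibling is an assumption (adjust to your hdfs-site.xml)
mkdir -p /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode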

Make sure the directory /usr/local/hadoop_store has the right owner and permissions:

hduser@localhost$ sudo chown hduser:hadoop -R /usr/local/hadoop_store
hduser@localhost$ sudo chmod 777 -R /usr/local/hadoop_store
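
Note that 777 is broader than the DataNode strictly needs; ownership by the user that starts the daemons is what actually clears the EPERM, so a tighter mode also works. A quick check after the chown/chmod:

# Verify owner/group and mode; hduser is assumed to be the user
# that runs start-dfs.sh
ls -ld /usr/local/hadoop_store /usr/local/hadoop_store/hdfs/datanode
# 755 is enough once hduser owns the tree (optional tightening)
sudo chmod 755 -R /usr/local/hadoop_store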

Format the namenode:

hduser@localhost$ hadoop namenode -format
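
On Hadoop 2.x this command still works but prints a deprecation notice; the equivalent hdfs form is:

hdfs namenode -format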

Start all the processes again:
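
# Rerun the start scripts and check jps again
start-dfs.sh
start-yarn.sh
jps   # DataNode should now appear alongside NameNode and the YARN daemons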
