namenode, datanode not listed by jps

Problem description


Environment: ubuntu 14.04, hadoop 2.6

After I run start-all.sh and then jps, DataNode is not listed in the terminal output:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

According to this answer: Datanode process not running in Hadoop

I tried its best solution:

  • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
  • rm -Rf /app/tmp/hadoop-your-username/*
  • bin/hadoop namenode -format (or hdfs in the 2.x series)
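The three steps above can be sketched as a single reset script for a 2.x install. The `HADOOP_HOME` and temp-dir paths below are assumptions taken from this setup; adjust them to yours, and note that `hdfs namenode -format` erases all HDFS metadata. A `DRY_RUN` guard is included so the sketch can be previewed before anything destructive runs:

```shell
#!/bin/sh
# Sketch of the reset procedure for Hadoop 2.x (assumed paths; adjust to your install).
# WARNING: 'hdfs namenode -format' wipes all HDFS metadata.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}    # assumed install location
HADOOP_TMP=${HADOOP_TMP:-/app/tmp/hadoop-$USER}  # assumed hadoop.tmp.dir

DRY_RUN=${DRY_RUN:-1}  # set DRY_RUN=0 to actually execute the commands
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

run "$HADOOP_HOME/sbin/stop-dfs.sh"
run "$HADOOP_HOME/sbin/stop-yarn.sh"
run rm -rf "$HADOOP_TMP"/*              # clear the temp/data directory
run "$HADOOP_HOME/bin/hdfs" namenode -format
```

With `DRY_RUN=1` (the default) it only prints what it would do, which is a cheap way to sanity-check the paths first.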

However, now I get this:

>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even the NameNode is missing. Please help me.

DataNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032

NameNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0
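Before changing anything, it helps to pull the FATAL/ERROR lines out of those logs, since they name the failing directory directly. A minimal sketch (a synthetic log is inlined so it runs anywhere; on the real system, point LOG at a file under /usr/local/hadoop/logs):

```shell
# Pull failure lines out of a Hadoop daemon log. Real logs live under
# $HADOOP_HOME/logs/ (e.g. hadoop-coda-namenode-ubuntu.log); a synthetic
# log is used here so the sketch is self-contained.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
15/04/30 01:07:25 INFO namenode.NameNode: registered UNIX signal handlers
15/04/30 01:07:26 FATAL namenode.NameNode: Failed to start namenode.
EOF

fatal_lines=$(grep -E 'FATAL|ERROR' "$LOG")   # keep only failure lines
echo "$fatal_lines"
rm -f "$LOG"
```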

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
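To double-check that this file really maps mapreduce.framework.name to yarn, a grep/sed one-liner is enough. This is only a sketch: the file contents are inlined at a temp path so it is self-contained, and `xmllint --xpath` would be more robust if it is installed; on a real install, point CONF at $HADOOP_HOME/etc/hadoop/mapred-site.xml:

```shell
# Check that mapred-site.xml sets mapreduce.framework.name to "yarn".
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

# Take the line after the <name> match and strip the <value> tags around it.
framework=$(grep -A1 '<name>mapreduce.framework.name</name>' "$CONF" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "mapreduce.framework.name = $framework"
rm -f "$CONF"
```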

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager
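On a single-node (pseudo-distributed) setup like this one, all five daemons should appear in jps. A small sketch that diffs the jps output against that expected set; the captured sample below is the one from this question, and on a live system you would replace it with "$(jps)":

```shell
# Compare jps output against the daemons expected on a pseudo-distributed node.
# A captured sample is used here; substitute jps_output="$(jps)" on a live system.
jps_output="20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager"

missing=""
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # Match " <daemon>" with a leading space so NameNode does not
  # falsely match inside SecondaryNameNode.
  case "$jps_output" in
    *" $daemon"*) : ;;                   # daemon present
    *) missing="$missing $daemon" ;;     # daemon absent
  esac
done
echo "missing:$missing"
```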

UPDATE

hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password: 
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode

Solution

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.

FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/namenode folder, or because the folder does not exist. To rectify the problem, try the following options:

OPTION I:

If you don't have the folder /usr/local/hadoop_store/hdfs, create it and give it the right permissions as follows:

sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Change hadoopuser and hadoopgroup to your hadoop username and group name respectively. Now, try to start the hadoop processes. If the problem still persists, try Option II.
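To confirm that the ownership and mode actually took effect, stat can be used. This sketch runs on a throwaway directory so it works anywhere; on the real system, point DIR at /usr/local/hadoop_store/hdfs (the `-c` flags are GNU stat, as shipped on Ubuntu):

```shell
# Verify that a directory tree has the expected mode after chmod -R 755.
# A throwaway dir stands in for /usr/local/hadoop_store/hdfs here.
DIR=$(mktemp -d)
mkdir -p "$DIR/namenode" "$DIR/datanode"
chmod -R 755 "$DIR"

mode=$(stat -c '%a' "$DIR/datanode")   # octal permission bits
owner=$(stat -c '%U' "$DIR/datanode")  # owning user
echo "mode=$mode owner=$owner"
rm -rf "$DIR"
```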

OPTION II:

Remove the contents of the /usr/local/hadoop_store/hdfs folder:

sudo rm -r /usr/local/hadoop_store/hdfs/*

Change the folder permissions:

sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Now, start the hadoop processes. It should work.

NOTE: If the error persists, post the new logs.

UPDATE:

In case you haven't created the hadoop user and group, do it as follows:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop

Now, change ownership of /usr/local/hadoop and /usr/local/hadoop_store:

sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store

Change your user to hadoop:

su - hadoop

Enter your hadoop user password. Your terminal prompt should now look like this:

hadoop@ubuntu:$

Now, type:

$HADOOP_HOME/bin/start-all.sh

or

sh /usr/local/hadoop/bin/start-all.sh
