No Namenode or Datanode or Secondary NameNode to stop
Problem description
I installed Hadoop on Ubuntu 12.04 by following the procedure in the link below:
http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php
Everything installed successfully, but when I run start-all.sh only some of the services come up:
wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:
hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin
hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
hduse@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password:
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
Starting secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps
7545 ResourceManager
7885 NodeManager
When I then stop the services by running the script stop-all.sh:
hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
hduse@localhost's password:
localhost: no namenode to stop
hduse@localhost's password:
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password:
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hduse@localhost's password:
localhost: stopping nodemanager
no proxyserver to stop
My configuration files
Editing the .bashrc file
vi ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
#HADOOP VARIABLES END
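Note that HADOOP_COMMON_LIB_NATIVE_DIR and HADOOP_OPTS are each exported twice, and the second pair points at $HADOOP_HOME, which this file never sets. A minimal sketch for sanity-checking the result after reloading the file (nothing beyond the variables above is assumed):

source ~/.bashrc
echo "HADOOP_INSTALL = $HADOOP_INSTALL"   # expect /usr/local/hadoop
echo "HADOOP_HOME    = $HADOOP_HOME"      # empty: never exported in .bashrc
which start-all.sh                        # should resolve under /usr/local/hadoop/sbin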
hdfs-site.xml
vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.
    </description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>
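For dfs.namenode.name.dir and dfs.datanode.data.dir to work, the directories behind the file: URIs must exist and be writable by the HDFS user. A minimal sketch, assuming the hduse account from the session above (the hadoop group name is an assumption):

# Create the NameNode and DataNode storage directories and hand them to hduse.
sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
sudo chown -R hduse:hadoop /usr/local/hadoop_store   # group "hadoop" is assumed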
hadoop-env.sh
vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"

export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
core-site.xml
vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose
      scheme and authority determine the FileSystem implementation. The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class. The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
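The same applies to hadoop.tmp.dir: HDFS keeps its working data under it, so /app/hadoop/tmp must exist and be writable by the HDFS user. A minimal sketch under that assumption:

# Create the temporary directory referenced by hadoop.tmp.dir.
sudo mkdir -p /app/hadoop/tmp
sudo chown -R hduse /app/hadoop/tmp   # owner matches the hduse session above
sudo chmod 750 /app/hadoop/tmp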
mapred-site.xml
vi /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
      at. If "local", then jobs are run in-process as a single map
      and reduce task.
    </description>
  </property>
</configuration>
$ javac -version
javac 1.8.0_66
$ java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
I am new to Hadoop and could not find the issue. Where can I find the log files for the JobTracker and NameNode in order to track the services?
Recommended answer

If it is not an SSH issue, do the following:
1. Delete all contents from the temporary directory with rm -Rf /app/hadoop/tmp and format the namenode with bin/hadoop namenode -format.
2. Start the namenode and datanode with bin/start-dfs.sh.
3. Type jps on the command line to check whether the daemons are running (a sketch of these steps follows).
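A minimal sketch of those three steps as one session, run as hduse with the PATH from the .bashrc above; the tail target is the NameNode log file named in the start-up output:

# 1. Clear the temporary directory. rm -Rf removes the directory itself,
#    so re-create it before formatting.
rm -Rf /app/hadoop/tmp
mkdir -p /app/hadoop/tmp

# 2. Format the NameNode. This wipes any existing HDFS metadata.
hadoop namenode -format

# 3. Start HDFS and check which daemons came up.
start-dfs.sh
jps   # NameNode, DataNode and SecondaryNameNode should all be listed

# If a daemon is still missing, inspect its log under /usr/local/hadoop/logs:
tail -n 50 /usr/local/hadoop/logs/hadoop-hduse-namenode-*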
Check whether hduse has rights to write to the hadoop_store/hdfs/namenode and datanode directories with ls -ld <directory>.
You can change the rights with sudo chmod 777 /usr/local/hadoop_store/hdfs/namenode/.
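A short sketch of that check and fix; chmod 777 makes the directory world-writable, and the chown alternative in the comment is an assumption, not part of the answer:

# Inspect ownership and permissions of the HDFS storage directories.
ls -ld /usr/local/hadoop_store/hdfs/namenode
ls -ld /usr/local/hadoop_store/hdfs/datanode

# Either open them up completely, as suggested above...
sudo chmod 777 /usr/local/hadoop_store/hdfs/namenode/
# ...or (assumed alternative) hand ownership to the HDFS user instead:
# sudo chown -R hduse /usr/local/hadoop_store/hdfs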