Unable to start daemons using start-dfs.sh


Problem description

We are using the CDH 4.0.0 distribution from Cloudera. We are unable to start the daemons using the command below.

>start-dfs.sh
Starting namenodes on [localhost]
hduser@localhost's password: 
localhost: mkdir: cannot create directory `/hduser': Permission denied
localhost: chown: cannot access `/hduser/hduser': No such file or directory
localhost: starting namenode, logging to /hduser/hduser/hadoop-hduser-namenode-canberra.out
localhost: /home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0/sbin/hadoop-daemon.sh: line 150: /hduser/hduser/hadoop-hduser-namenode-canberra.out: No such file or directory
localhost: head: cannot open `/hduser/hduser/hadoop-hduser-namenode-canberra.out' for reading: No such file or directory

Solution

Looks like you're using tarballs?

Try setting an override for the default HADOOP_LOG_DIR location in your etc/hadoop/hadoop-env.sh config file, like so:

export HADOOP_LOG_DIR=/path/to/hadoop/extract/logs/

And then retry sbin/start-dfs.sh, and it should work.
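As a rough sketch of the whole sequence for a tarball install (assuming the extract location shown in the log above, /home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0, and that hduser owns that directory):

# assumed extract location; adjust to wherever you unpacked the tarball
HADOOP_HOME=/home/hduser/work/software/cloudera/hadoop-2.0.0-cdh4.0.0

# create a log directory the hduser account can actually write to
mkdir -p "$HADOOP_HOME/logs"

# point the daemons at it by adding the override to etc/hadoop/hadoop-env.sh
echo "export HADOOP_LOG_DIR=$HADOOP_HOME/logs" >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"

# retry
"$HADOOP_HOME/sbin/start-dfs.sh"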

In packaged environments, the start-stop scripts are tuned to provide a unique location for each type of service, via the same HADOOP_LOG_DIR env-var, so they do not have the same issue you're seeing.
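For illustration only, on a packaged CDH install the per-service defaults usually export something along these lines (the exact file names and paths here are assumptions and vary by version, so check your own install):

# e.g. /etc/default/hadoop-hdfs-namenode (illustrative; not guaranteed to match your version)
export HADOOP_LOG_DIR=/var/log/hadoop-hdfs
export HADOOP_PID_DIR=/var/run/hadoop-hdfs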

If you are using packages instead, don't use these scripts; just do:

service hadoop-hdfs-namenode start
service hadoop-hdfs-datanode start
service hadoop-hdfs-secondarynamenode start
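To confirm the daemons actually came up, a quick check (assuming the packaged logs land under /var/log/hadoop-hdfs, the usual default, though not guaranteed on every setup):

# report whether each daemon is running
sudo service hadoop-hdfs-namenode status
sudo service hadoop-hdfs-datanode status

# if one failed, its log file explains why
sudo tail -n 50 /var/log/hadoop-hdfs/*.log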

