Hadoop error: cannot start-all.sh


Problem description


I've set up Hadoop in single-node mode on my laptop. Info: Ubuntu 12.10, JDK 1.7 (Oracle), Hadoop installed from a .deb file. Locations: /etc/hadoop and /usr/share/hadoop.

I configured /usr/share/hadoop/templates/conf/core-site.xml, adding two properties:

    <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
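One thing worth checking (an assumption on my part, not stated in the post): the hadoop.tmp.dir path above must exist and be writable by the user running the daemons, or the namenode will fail silently to the log file. A small sketch of such a check:

```shell
#!/bin/sh
# Returns success if the given directory exists and is writable
# by the current user.
check_writable_dir() {
  [ -d "$1" ] && [ -w "$1" ]
}

# hadoop.tmp.dir from the core-site.xml above.
if check_writable_dir /app/hadoop/tmp; then
  echo "/app/hadoop/tmp is ready"
else
  echo "create it first, e.g.: sudo mkdir -p /app/hadoop/tmp && sudo chown hduser /app/hadoop/tmp"
fi
```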

in hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

in mapred-site.xml

    <property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

When I start with the command hduser@sepdau:~$ start-all.sh:

starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out

but when I view the processes with jps:

hduser@sepdau:~$ jps
13725 Jps

More:

 root@sepdau:/home/sepdau# netstat -plten | grep java
tcp6       0      0 :::8080                 :::*                    LISTEN      117        9953        1316/java       
tcp6       0      0 :::53976                :::*                    LISTEN      117        16755       1316/java       
tcp6       0      0 127.0.0.1:8700          :::*                    LISTEN      1000       786271      8323/java       
tcp6       0      0 :::59012                :::*                    LISTEN      117        16756       1316/java  

when I run stop-all.sh:

    hduser@sepdau:~$ stop-all.sh
no jobtracker to stop
localhost: no tasktracker to stop
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop

in my hosts file

hduser@sepdau:~$ cat /etc/hosts

127.0.0.1       localhost
127.0.1.1   sepdau.com



# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The slaves file contains localhost; the masters file contains localhost.

Here are some logs:

    hduser@sepdau:/home/sepdau$ start-all.sh
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-namenode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-datanode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-secondarynamenode.pid: No such file or directory
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-jobtracker.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-tasktracker.pid: No such file or directory

I also tried with the root user, but it has the same problem.

What am I doing wrong here? Also, how do I connect Eclipse to Hadoop with the Hadoop plugin? Thanks in advance.

Solution

Try adding

<property>
  <name>dfs.name.dir</name>
  <value>/home/abhinav/hdfs</value>
</property>

to hdfs-site.xml, and make sure that the directory exists.
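To make sure the directory exists, it can be created before restarting the daemons; a sketch, assuming /home/abhinav/hdfs from the snippet above is replaced by a path the Hadoop user owns:

```shell
#!/bin/sh
# Directory for dfs.name.dir; the answer's snippet uses
# /home/abhinav/hdfs -- substitute a path owned by your Hadoop user.
NAME_DIR="${NAME_DIR:-$HOME/hdfs}"
mkdir -p "$NAME_DIR"

# After changing dfs.name.dir, the namenode usually has to be
# (re)formatted before start-all.sh will bring it up:
#   hadoop namenode -format
ls -ld "$NAME_DIR"
```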

I have written a small tutorial for this; see if it helps: http://blog.abhinavmathur.net/2013/01/experience-with-setting-multinode.html
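Separately, the "mkdir: cannot create directory `/var/run/hadoop': Permission denied" lines in the question's log point at a second problem: hduser cannot create the pid directory, so the .pid files are never written and stop-all.sh later finds nothing to stop. Two possible fixes, sketched as assumptions rather than a verified recipe (hduser is from the question; the group name and file paths are guesses to adapt):

```shell
#!/bin/sh
# Option A (run as root): create the pid directory and hand it to the
# Hadoop user. "hduser" is from the question; the "hadoop" group is
# an assumption -- use whatever group your installation created.
#   sudo mkdir -p /var/run/hadoop
#   sudo chown hduser:hadoop /var/run/hadoop

# Option B: point Hadoop at a pid directory the user already owns,
# via HADOOP_PID_DIR (typically exported from hadoop-env.sh).
PID_DIR="$HOME/hadoop-pids"
mkdir -p "$PID_DIR"
export HADOOP_PID_DIR="$PID_DIR"
echo "HADOOP_PID_DIR=$HADOOP_PID_DIR"
```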
