Could not find or load main class when trying to format namenode; hadoop installation on MAC OS X 10.9.2


Problem description


I'm trying to set up a single-node Hadoop development cluster on my MAC OS X 10.9.2 machine. I've tried various online tutorials, the most recent being this one. To summarize what I did:

1) $ brew install hadoop

This installed hadoop 2.2.0 in /usr/local/Cellar/hadoop/2.2.0
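As it turns out, the layout of the brew keg matters here (see the solution below), so it is worth inspecting up front. On my understanding of the Homebrew formula, the keg root holds wrapper scripts while the actual distribution sits under libexec:

$ ls /usr/local/Cellar/hadoop/2.2.0          # wrapper scripts in bin/, plus libexec/
$ ls /usr/local/Cellar/hadoop/2.2.0/libexec  # the actual distribution: bin/, etc/, sbin/, share/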

2) Configured Environment Variables. Here's what the relevant part of my .bash_profile looks like:

### JAVA_HOME
export JAVA_HOME="$(/usr/libexec/java_home)"

### HADOOP Environment variables
export HADOOP_PREFIX="/usr/local/Cellar/hadoop/2.2.0"
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/libexec/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX

export CLASSPATH=$CLASSPATH:.
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/libexec/share/hadoop/common/hadoop-common-2.2.0.jar
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/libexec/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar
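
As a quick sanity check (my addition, not part of the original post), the exports can be verified in a fresh shell:

$ source ~/.bash_profile
$ echo $JAVA_HOME
$ echo $HADOOP_CONF_DIR
$ ls $HADOOP_CONF_DIR    # should list core-site.xml, hdfs-site.xml, yarn-site.xml, ...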

3) Configured hdfs-site.xml

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/Cellar/hadoop/2.2.0/hdfs/datanode</value>
    <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/Cellar/hadoop/2.2.0/hdfs/namenode</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
</configuration>
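
A hedged aside (not in the original post): hdfs namenode -format creates the name directory itself, but creating both paths up front avoids surprises later if permissions differ:

$ mkdir -p /usr/local/Cellar/hadoop/2.2.0/hdfs/namenode
$ mkdir -p /usr/local/Cellar/hadoop/2.2.0/hdfs/datanode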

4) Configured core-site.xml

<configuration>
  <!-- Let Hadoop modules know where the HDFS NameNode is at! -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost/</value>
    <description>NameNode URI</description>
  </property>
</configuration>
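
With no port in the URI, hdfs://localhost/ falls back to the default NameNode RPC port, 8020 in Hadoop 2.x. Once the scripts work (i.e., after the classpath problem below is resolved), the resolved value can be checked with:

$ $HADOOP_PREFIX/bin/hdfs getconf -confKey fs.defaultFS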

5) Configured yarn-site.xml

<configuration>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>128</value>
    <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
    <description>Physical memory, in MB, to be made available to running containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Number of CPU cores that can be allocated for containers.</description>
  </property>
</configuration>
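
For scale (my arithmetic, not from the post): with 4096 MB per NodeManager and a 128 MB scheduler minimum, a node could host at most 4096 / 128 = 32 minimum-size containers, while any single request is capped at 2048 MB and 2 vcores.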

6) Then I tried to format the namenode using:

$HADOOP_PREFIX/bin/hdfs namenode -format

This gives me the error: Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode.

I looked at the hdfs code, and the line that runs it basically amounts to calling

$ java org.apache.hadoop.hdfs.server.namenode.NameNode
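
One way to confirm the classpath theory (my suggestion, not in the original post) is to print the classpath the launcher scripts actually assemble, and to check whether the HDFS jar sits where they expect it:

$ $HADOOP_PREFIX/bin/hadoop classpath
$ ls $HADOOP_PREFIX/share/hadoop/hdfs/hadoop-hdfs-*.jar   # fails while HADOOP_PREFIX is the keg root, since share/ lives under libexec/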

So, thinking this was a classpath issue, I tried a few things:

a) adding hadoop-common-2.2.0.jar and hadoop-hdfs-2.2.0.jar to the classpath as you can see above in my .bash_profile script

b) adding the line

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

to my .bash_profile on the recommendation of this tutorial. (I later removed it because it didn't seem to help.)

c) I also considered writing a shell script that adds every jar in $HADOOP_HOME/libexec/share/hadoop to the $HADOOP_CLASSPATH, but this just seemed unnecessary and prone to future problems.
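
For concreteness, the loop described in (c) would look roughly like the sketch below (a sketch only, using the HADOOP_CLASSPATH variable the post mentions; as noted, it shouldn't be necessary):

for jar in $HADOOP_HOME/libexec/share/hadoop/*/*.jar; do
  export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$jar
done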

Any idea why I keep getting the Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode ? Thanks in advance.

Solution

Due to the way the brew package is laid out, you need to point the HADOOP_PREFIX to the libexec folder in the package:

export HADOOP_PREFIX="/usr/local/Cellar/hadoop/2.2.0/libexec"

You would then remove the libexec from your declaration of the conf directory:

export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
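
After making both changes, a quick verification (my addition): reload the profile and re-run the format. With HADOOP_PREFIX pointing at libexec, share/hadoop/ sits directly under it and the launcher scripts can assemble the classpath:

$ source ~/.bash_profile
$ echo $HADOOP_PREFIX              # /usr/local/Cellar/hadoop/2.2.0/libexec
$ ls $HADOOP_PREFIX/share/hadoop   # common, hdfs, yarn, ...
$ $HADOOP_PREFIX/bin/hdfs namenode -format

Note that the manual CLASSPATH exports from step 2 still reference $HADOOP_HOME/libexec/..., which no longer resolves once HADOOP_PREFIX (and hence HADOOP_HOME) already ends in libexec; they were only a workaround and can simply be dropped.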
