Hadoop master cannot start slave with different $HADOOP_HOME

This article describes how to deal with a Hadoop master that cannot start a slave using a different $HADOOP_HOME; it should be a useful reference for readers facing the same problem.

Problem description

On the master, $HADOOP_HOME is /home/a/hadoop; on the slave, $HADOOP_HOME is /home/b/hadoop.

On the master, when I try to use start-all.sh, the master name node starts successfully, but it fails to start the slave's data node with the following message:

b@192.068.0.2: bash: line 0: cd: /home/b/hadoop/libexec/..: No such file or directory
b@192.068.0.2: bash: /home/b/hadoop/bin/hadoop-daemon.sh: No such file or directory

Any idea on how to specify the $HADOOP_HOME for the slave in the master's configuration?

Solution

I don't know of a way to configure different home directories for the various slaves from the master, but the Hadoop FAQ says that the Hadoop framework does not require ssh and that the DataNode and TaskTracker daemons can be started manually on each node.
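
As a minimal sketch of that manual approach, assuming the Hadoop 1.x script layout and the slave path from the question (adjust the path to each node's own installation):

# Run on each slave node, using that node's own installation path.
# /home/b/hadoop is the slave path from the question; change it per node.
export HADOOP_HOME=/home/b/hadoop
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode      # starts the DataNode daemon locally
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker   # starts the TaskTracker daemon locally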

I would suggest writing your own scripts to start things, taking into account the specific environments of your nodes. However, make sure to include all the slaves in the master's slaves file. It seems that this is necessary, and that heartbeats alone are not enough for the master to add slaves.
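
A hedged sketch of such a script, run from the master: the script name, the ssh targets, and the per-host paths below are assumptions made up for illustration, not part of the original answer.

#!/bin/bash
# start-slave-daemons.sh (hypothetical): start each slave's daemons over ssh
# using that slave's own HADOOP_HOME instead of the master's path.
# Hosts, users, and paths below are example values only.
declare -A SLAVE_HOMES=(
  ["b@192.068.0.2"]="/home/b/hadoop"
  # ["user@another-slave"]="/path/to/its/hadoop"
)

for target in "${!SLAVE_HOMES[@]}"; do
  hadoop_home="${SLAVE_HOMES[$target]}"
  ssh "$target" "${hadoop_home}/bin/hadoop-daemon.sh start datanode"
  ssh "$target" "${hadoop_home}/bin/hadoop-daemon.sh start tasktracker"
done

The master's conf/slaves file itself is just a plain list of slave hostnames, one per line, so listing every slave there is easy to keep in place alongside a custom start script.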

This concludes the article on the Hadoop master being unable to start a slave with a different $HADOOP_HOME. We hope the answer above helps, and thank you for supporting IT屋!
