Hadoop 2.x -- how to configure secondary namenode?


Question




I have an old Hadoop install that I'm looking to update to Hadoop 2. In the old setup, I have a $HADOOP_HOME/conf/masters file that specifies the secondary namenode.

Looking through the Hadoop 2 documentation I can't find any mention of a "masters" file, or how to setup a secondary namenode.

Any help in the right direction would be appreciated.

Solution

The slaves and masters files in the conf folder are only used by some scripts in the bin folder, such as start-mapred.sh, start-dfs.sh, and start-all.sh.

These scripts are a mere convenience: you run them from a single node, and they ssh into each master/slave node and start the desired Hadoop service daemons.
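For illustration, the Hadoop 1.x layout these scripts read might look like the following (the host names are placeholders, not from the original post):

```
# conf/masters: host(s) on which start-dfs.sh launches the secondary namenode
snn-host.example.com

# conf/slaves: hosts on which the worker daemons (datanode, tasktracker) start
worker1.example.com
worker2.example.com
```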

You only need these files on the namenode machine if you intend to launch your cluster from that single node (using password-less ssh).
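For completeness, Hadoop 2's start scripts no longer read a masters file; they derive the secondary namenode host from hdfs-site.xml instead. A sketch, assuming the stock property name dfs.namenode.secondary.http-address and a placeholder host:

```xml
<!-- hdfs-site.xml: where the secondary namenode's HTTP server binds;
     start-dfs.sh uses this to decide which host runs the daemon -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>snn-host.example.com:50090</value>
</property>
```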

Alternatively, you can start a Hadoop daemon manually on a machine via

bin/hadoop-daemon.sh start [namenode | secondarynamenode | datanode | jobtracker | tasktracker]

To run the secondary namenode, use the above script on the designated machine, passing the 'secondarynamenode' value to the script.
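Run on the machine chosen for the secondary namenode, that step might look like this (paths are relative to the Hadoop install directory):

```
# start the secondary namenode daemon on this machine
bin/hadoop-daemon.sh start secondarynamenode

# later, stop it the same way
bin/hadoop-daemon.sh stop secondarynamenode
```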

