"start-all.sh";和"start-dfs.sh";从主节点不启动从属节点服务? [英] "start-all.sh" and "start-dfs.sh" from master node do not start the slave node services?


Problem description

I have updated the /conf/slaves file on the Hadoop master node with the hostnames of my slave nodes, but I'm not able to start the slaves from the master. I have to start each slave individually, and then my 5-node cluster is up and running. How can I start the whole cluster with a single command from the master node?

Also, SecondaryNameNode is running on all the slaves. Is that a problem? If so, how can I remove it from the slaves? I think there should be only one SecondaryNameNode in a cluster with one NameNode, am I right?

Thanks!

Recommended answer

In Apache Hadoop 3.0, use the $HADOOP_HOME/etc/hadoop/workers file to add slave (worker) nodes, one hostname per line. In Hadoop 2.x the equivalent file is $HADOOP_HOME/etc/hadoop/slaves.
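As a minimal sketch (the hostnames below are placeholders, and passwordless SSH from the master to each worker is assumed), the workers file simply lists one worker per line:

    # $HADOOP_HOME/etc/hadoop/workers
    slave1
    slave2
    slave3
    slave4

With that in place, the daemons on every listed worker can be started from the master:

    $HADOOP_HOME/sbin/start-dfs.sh     # NameNode, SecondaryNameNode, DataNodes
    $HADOOP_HOME/sbin/start-yarn.sh    # ResourceManager, NodeManagers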
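Regarding the SecondaryNameNode part of the question: with the standard scripts, start-dfs.sh launches the SecondaryNameNode only on the host named by dfs.namenode.secondary.http-address in hdfs-site.xml, so seeing one on every slave usually means the daemon was started by hand on each node. A hedged sketch of pinning it to a single host (the hostname "master" is a placeholder; 9868 is the Hadoop 3.x default port):

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>master:9868</value>
    </property>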

