Unable to start start-dfs.sh in Hadoop Multinode cluster
Problem Description
I have created a Hadoop multinode cluster and configured SSH on both the master and slave nodes; from the master I can now connect to the slave without a password.
But when I run start-dfs.sh on the master node, I am unable to connect to the slave node and the execution stops at the lines below.
log:
HNname@master:~$ start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-namenode-master.out
HDnode@slave's password: master: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-datanode-master.out
I pressed Enter
slave: Connection closed by 192.168.0.2
master: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-secondarynamenode-master.out
jobtracker running as process 10396. Stop it first.
HDnode@slave's password: master: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-tasktracker-master.out
slave: Permission denied, please try again.
HDnode@slave's password:
After entering the slave's password, the connection is closed.
Here is what I have tried, with no results (the commands are sketched below):
- formatted the namenode on both the master and slave nodes
- created a new SSH key and configured it on both nodes
- overrode the default HADOOP_LOG_DIR as described in this post
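For reference, those attempts roughly correspond to the commands below (a sketch only; the Hadoop 1.x command names and the hadoop-env.sh location are assumptions based on the /usr/local/hadoop paths in the logs):
hadoop namenode -format                  # reformat the namenode (wipes existing HDFS metadata)
ssh-keygen -t rsa -P ""                  # generate a new SSH key pair
# in /usr/local/hadoop/conf/hadoop-env.sh, override the log directory, e.g.:
export HADOOP_LOG_DIR=/var/log/hadoop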
I think you missed this step: "Add the SSH Public Key to the authorized_keys file on your target hosts".
Just redo the password-less SSH setup correctly. Follow these steps (a consolidated command sketch follows the list):
1. Generate public and private SSH keys:
   ssh-keygen
2. Copy the SSH public key (id_rsa.pub) to the root account on your target hosts:
   .ssh/id_rsa
   .ssh/id_rsa.pub
3. Add the SSH public key to the authorized_keys file on your target hosts:
   cat id_rsa.pub >> authorized_keys
4. Depending on your version of SSH, you may need to set permissions on the .ssh directory (to 700) and the authorized_keys file in that directory (to 600) on the target hosts:
   chmod 700 ~/.ssh
   chmod 600 ~/.ssh/authorized_keys
5. Check the connection:
   ssh root@<remote.target.host>
   where <remote.target.host> has the value of each host name in your cluster. If the following warning message displays during your first connection, "Are you sure you want to continue connecting (yes/no)?", enter yes.
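As a concrete sketch of steps 1-5, run from the master as the user that starts the Hadoop daemons; the HDnode user and the slave hostname are taken from the logs above and are only examples:
ssh-keygen -t rsa -P ""                  # step 1: creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-copy-id HDnode@slave                 # steps 2-3: appends id_rsa.pub to ~/.ssh/authorized_keys on the slave
# if ssh-copy-id is not available:
# cat ~/.ssh/id_rsa.pub | ssh HDnode@slave 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
ssh HDnode@slave 'chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys'    # step 4: fix permissions
ssh HDnode@slave hostname                # step 5: should print "slave" with no password prompt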
Refer: Set Up Password-less SSH
Note: if passwordless SSH is set up properly, you will not be asked for a password.
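To double-check, assuming the same user and host names as above (an assumption, not part of the original answer), the login must succeed without a prompt before the daemons will start cleanly:
ssh HDnode@slave exit        # must return without asking for a password
start-all.sh                 # re-run on the master
jps                          # master should list NameNode, SecondaryNameNode, JobTracker
ssh HDnode@slave jps         # slave should list DataNode, TaskTracker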