Hadoop: binding multiple IP addresses to a cluster NameNode


Problem description



I have a four-node Hadoop cluster on SoftLayer. The master (NameNode) has a public IP address for external access and a private IP address for cluster access. The slave nodes (datanodes) have private IP addresses only, and I'm trying to connect them to the master without having to assign a public IP address to each slave node.

I've realised that setting fs.defaultFS to the NameNode's public address allows external access, except that the NameNode then listens only on that address for incoming connections, not on the private address. As a result, I get ConnectionRefused exceptions in the datanode logs, because the datanodes try to connect to the NameNode via its private IP address.

I figured the solution might be to bind both the public and private IP addresses to the NameNode, so that external access is preserved while my slave nodes can still connect.

So, is there a way to bind both of these addresses to the NameNode so that it listens on both?

Edit: Hadoop version 2.4.1.

Solution

The asker edited this into his question as an answer:

In hdfs-site.xml, set the value of dfs.namenode.rpc-bind-host to 0.0.0.0, and Hadoop will listen on both the private and public network interfaces, allowing both remote access and datanode access.
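As a concrete sketch, the setting above corresponds to the following hdfs-site.xml fragment. (The fs.defaultFS value in core-site.xml is assumed to remain set to the NameNode's public hostname, as described in the question; `namenode.example.com` below is a placeholder, not a name from the original post.)

```xml
<!-- hdfs-site.xml: make the NameNode RPC server bind to all interfaces,
     so it accepts connections on both the public and private addresses -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
```

```xml
<!-- core-site.xml: clients and datanodes still resolve the NameNode by this
     address; only the bind address changes, not the advertised one -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```

The key distinction is that fs.defaultFS controls the address clients use to reach the NameNode, while dfs.namenode.rpc-bind-host controls which local interface the RPC server binds to; setting the latter to 0.0.0.0 decouples the two.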

