fs.defaultFS only listens on localhost's port 8020

Question

I have a CDH4.3 all-in-one VM up and running, and I am trying to install a Hadoop client remotely. I noticed that, without changing any default settings, my Hadoop cluster is listening on 127.0.0.1:8020:

[cloudera@localhost ~]$ netstat -lent | grep 8020
tcp        0      0 127.0.0.1:8020              0.0.0.0:*                   LISTEN      492        100202 

[cloudera@localhost ~]$ telnet ${all-in-one vm external IP} 8020
Trying ${all-in-one vm external IP}...
telnet: connect to address ${all-in-one vm external IP} Connection refused
[cloudera@localhost ~]$ telnet 127.0.0.1 8020
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'

My remote machine has all the configuration files (core-site.xml, hdfs-site.xml) pointing to the ${all-in-one vm external IP}. When I run something from the remote client, I get this:

└ $ ./bin/hdfs --config /home/${myself}/hadoop-2.0.0-cdh4.3.0/etc/hadoop dfs -ls
13/10/27 05:27:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From ubuntu/127.0.1.1 to ${all-in-one vm external IP}:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

On my Hadoop all-in-one VM, I changed core-site.xml and hdfs-site.xml under /etc/hadoop/conf from localhost.localdomain -> ${all-in-one vm external IP}, but after restarting HDFS, it still listens on localhost:8020. Any ideas? How can I make it listen on ${external IP}:8020 instead of localhost?
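
For reference, a minimal sketch of what the fs.defaultFS entry in core-site.xml would look like after that change (${all-in-one vm external IP} is a placeholder for the real address, as above):

<!-- core-site.xml (sketch): fs.defaultFS names the NameNode endpoint clients connect to.
     ${all-in-one vm external IP} is a placeholder, not a literal value. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://${all-in-one vm external IP}:8020</value>
</property>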

Solution

You should be able to directly set the dfs.namenode.rpc-address property to 0.0.0.0:8020 to make the NameNode client IPC service listen on all interfaces, or set it to your specific IP to make it listen only there.
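
A minimal sketch of that setting in hdfs-site.xml (the 0.0.0.0 wildcard binds all interfaces; substitute a concrete IP to restrict the bind address):

<!-- hdfs-site.xml (sketch): bind the NameNode client RPC service to all interfaces -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>0.0.0.0:8020</value>
</property>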

That said, the ${all-in-one vm external IP} change you describe should also have worked, but since the question does not include your exact configuration and logs, I cannot tell why it did not.
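
After changing the property, a quick check along the lines of the netstat above should confirm the new bind address (the service name below is the CDH4 packaged default and is an assumption about your install):

# restart the NameNode, then confirm it no longer binds only to 127.0.0.1
sudo service hadoop-hdfs-namenode restart
netstat -lent | grep 8020    # expect 0.0.0.0:8020 (or the specific IP you set)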
