Connection is not being established

This article describes how to deal with a connection that is not being established between two linked Docker containers; it may be a useful reference for anyone facing the same problem.

Problem Description

I have two running containers, one for flume and one for hadoop; call them hadoop2 and flume2. I created these two containers from two images, namely hadoop_alone and flume_alone.

   docker run -d -p 10.236.173.XX:8020:8020 -p 10.236.173.XX:50030:50030 -p 10.236.173.XX:50060:50060 -p 10.236.173.XX:50070:50070 -p 10.236.173.XX:50075:50075 -p 10.236.173.XX:50090:50090 -p 10.236.173.XX:50105:50105 --name hadoopservices hadoop_alone

I got into the hadoop container and checked the exposed ports. All the ports are exposed properly.
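
For reference, the published port mappings can also be checked from the host; this is only a sketch, assuming the container name used in the run command above:

    # On the docker host: list the port mappings of the hadoop container
    docker port hadoopservices
    # expected output along the lines of:
    #   8020/tcp -> 10.236.173.XX:8020
    #   50070/tcp -> 10.236.173.XX:50070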

    docker run -d --name flumeservices -p 0.0.0.0:5140:5140 -p 0.0.0.0:44444:44444 --link hadoopservices:hadoopservices flume_alone

I got into the flume container and checked the env and /etc/hosts entries. There is an entry for hadoopservices, and the env variables are created automatically.
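
Both of these can also be confirmed from the host; a small sketch, assuming the container name used in the run command above:

    # /etc/hosts should contain a line mapping "hadoopservices" to the hadoop container IP
    docker exec flumeservices cat /etc/hosts
    # --link also injects HADOOPSERVICES_* environment variables
    docker exec flumeservices env | grep -i hadoopservices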

My core-site.xml:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://0.0.0.0:8020</value>
    </property>

I modified it so it will accept connections on port 8020 from all the containers.
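
One way to confirm the change took effect is to look at the namenode's listening socket inside the hadoop container after restarting it; a sketch, assuming netstat is available in the image:

    # Inside the hadoop container: the namenode RPC port should be bound to all
    # interfaces (0.0.0.0:8020), not only to 127.0.0.1:8020
    netstat -tln | grep 8020
    # expected:
    #   tcp  0  0 0.0.0.0:8020  0.0.0.0:*  LISTEN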

My source and sink in flume.conf

 a2.sources.r1.type = netcat
 a2.sources.r1.bind = localhost
 a2.sources.r1.port = 5140

 a2.sinks.k1.type = hdfs
 a2.sinks.k1.hdfs.fileType = DataStream
 a2.sinks.k1.hdfs.writeFormat = Text
 a2.sinks.k1.hdfs.path = hdfs://hadoopservices:8020/user/root/syslog/%y-%m-%d/%H%M/%S
 a2.sinks.k1.hdfs.filePrefix = events
 a2.sinks.k1.hdfs.roundUnit = minute
 a2.sinks.k1.hdfs.useLocalTimeStamp = true
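
Since the source is a netcat source listening on localhost:5140, a single test event can be pushed from inside the flume container once the agent (started further below) is running; a minimal sketch, assuming the nc utility is installed:

    # Inside the flume container, after the agent has started:
    echo "test event" | nc localhost 5140
    # the agent should then try to write the event to the HDFS sink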

I restarted the hadoop namenode after changing core-site.xml.

I try to write into hdfs from flume using

/usr/bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name a2 -Dflume.root.logger=INFO,console

It says

INFO hdfs.DFSClient: Exception in createBlockOutputStream

java.net.ConnectException: Connection refused

So I figured something is wrong with the connection established between these two containers. I got into the hadoop container and checked the port connections:

netstat -tna


tcp        0      0 127.0.0.1:52521         127.0.0.1:8020          TIME_WAIT
tcp        0      0 127.0.0.1:8020          127.0.0.1:52516         ESTABLISHED
tcp        0      0 127.0.0.1:52516         127.0.0.1:8020          ESTABLISHED

But I expect it to be:

tcp        0      0 172.17.1.XX:54342       172.17.1.XX:8020        TIME_WAIT
tcp        0      0 172.17.1.XX:54332       172.17.1.XX:8020        ESTABLISHED
tcp        0      0 172.17.1.XX:8020        172.17.1.XX:54332       ESTABLISHED

where 172.17.1.XX is the IP of my hadoop container.

So this is what I found. Is this the cause?

Which configuration should be modified? Or my run statement? What should be changed to establish a connection between these two docker containers so that I am able to write into hdfs from flume?

If you need more info, I'll edit it further.

Please give me some ideas.

Solution

If anybody faces the same problem, please follow these steps.

 1) Check whether 0.0.0.0:8020 is updated in core-site.xml

 2) If you update it inside the running container, **I suggest you restart ALL the services, NOT ONLY the namenode** [better to do this as part of the Dockerfile; a rough sketch follows this list].

 3) Check the `env` and `/etc/hosts` contents in the flume container.

 4) The hostname in `/etc/hosts` must match the host used in the `hdfs.path` parameter in flume.conf.

 5) Get into the hadoop container and run `netstat -tna`; you must see connections established to <hadoop_container_ip>:8020, not to your localhost [127.0.0.1].
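
Putting the steps together, a rough sketch of the checks with placeholder values (it assumes a standard Apache Hadoop installation where the stop-all.sh/start-all.sh scripts are on the PATH):

    # (2) in the hadoop container: restart ALL services after editing core-site.xml
    stop-all.sh && start-all.sh

    # (3)+(4) in the flume container: the hostname that --link wrote into /etc/hosts
    # must be the same host used in the hdfs.path of the sink
    cat /etc/hosts                                  # e.g.  172.17.1.XX  hadoopservices
    grep hdfs.path /etc/flume-ng/conf/flume.conf    # hdfs://hadoopservices:8020/...

    # (5) back in the hadoop container: connections should now target the container IP
    netstat -tna | grep 8020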

I hope this will be helpful to people who are trying to link containers and map ports.
