How to execute a command on a running docker container?


Problem Description

I have a container running Hadoop. I have another Dockerfile which contains Map-Reduce job commands such as creating the input directory, processing a default example, and displaying the output. The base image for the second file is hadoop_image, created from the first Dockerfile.

EDIT

Dockerfile - for hadoop

 #base image is ubuntu:precise
 #cdh installation
 #hadoop-0.20-conf-pseudo installation
 #CMD to start-all.sh
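
Filled in, the first Dockerfile might look roughly like the sketch below. This is an illustration only, not the asker's actual file; the Cloudera repository URL and the script path are assumptions:

  FROM ubuntu:precise
  # add Cloudera's CDH4 repository (URL assumed from Cloudera's precise layout)
  RUN echo "deb http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh precise-cdh4 contrib" \
        > /etc/apt/sources.list.d/cloudera.list
  RUN apt-get update && apt-get install -y --force-yes hadoop-0.20-conf-pseudo
  # ship the startup script and run it when the container starts
  ADD start-all.sh /usr/local/bin/start-all.sh
  RUN chmod +x /usr/local/bin/start-all.sh
  CMD ["/usr/local/bin/start-all.sh"]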

start-all.sh

 #start all the services under /etc/init.d/hadoop-*
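
Given the comment above, a minimal sketch of that script (whether the asker's version ends with a shell is not stated):

  #!/bin/bash
  # start every Hadoop service installed under /etc/init.d
  for svc in /etc/init.d/hadoop-*; do
    "$svc" start
  done
  # keep the container running after the services come up
  /bin/bash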

The hadoop base image is created from this.

Dockerfile2

 #base image is hadoop
 #flume-ng and flume-ng agent installation
 #conf change
 #flume-start.sh
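
As a sketch only, with flume-ng and flume-ng-agent as the CDH package names and the file paths assumed:

  FROM hadoop
  RUN apt-get update && apt-get install -y --force-yes flume-ng flume-ng-agent
  # drop in the changed configuration and the startup script (paths assumed)
  ADD flume.conf /etc/flume-ng/conf/flume.conf
  ADD flume-start.sh /usr/local/bin/flume-start.sh
  RUN chmod +x /usr/local/bin/flume-start.sh
  CMD ["/usr/local/bin/flume-start.sh"]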

flume-start.sh

 #start flume services
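
Based on the description further down (the question states /bin/bash is the last line), the script presumably looks something like:

  #!/bin/bash
  # start the flume-ng agent service
  /etc/init.d/flume-ng-agent start
  # leave the user at a shell, as described in the question
  /bin/bash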

I am running both containers separately, and that works fine. But if I run

docker run -it flume_service

it starts Flume and shows me a bash prompt [/bin/bash is the last line of flume-start.sh]. Then I execute

hadoop fs -ls /

in the second running container, I get the following error:

ls: Call From 514fa776649a/172.17.5.188 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

I understand I am getting this error because the Hadoop services are not started yet. But my doubt is: my first container is running, and I am using it as the base image for the second container. Then why am I getting this error? Do I need to change anything in the hdfs-site.xml file on the flume container?
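
Worth noting: a base image only shares the filesystem, not running processes, so the daemons started in the first container do not exist inside the second one. And if the goal is simply to run a command inside the already-running hadoop container, which is the question in the title, docker exec (available since Docker 1.3) does exactly that. A sketch, assuming the hadoop container was started under the hypothetical name hadoop:

  # run a one-off command inside the running hadoop container
  docker exec -it hadoop hadoop fs -ls /

  # or open an interactive shell in it
  docker exec -it hadoop /bin/bash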

This is a pseudo-distributed mode installation.

Any suggestions?

Or do I need to expose any ports, like so? If so, please provide me an example.
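
For reference, one way to wire the two containers together on Docker of that era is container linking; the container names below are hypothetical:

  # start the hadoop container under a known name
  docker run -d --name hadoop hadoop_image

  # start the flume container with a link; "hadoop" becomes resolvable as a hostname
  docker run -it --link hadoop:hadoop flume_service

With the link in place, the flume container would point fs.default.name in core-site.xml (not hdfs-site.xml) at hdfs://hadoop:8020 instead of hdfs://localhost:8020, so HDFS calls reach the container that actually runs the NameNode. Publishing ports with -p only matters for reaching the services from the host; linked containers on the default bridge can talk to each other directly.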

EDIT 2

  iptables -t nat -L -n

I see

  sudo iptables -t nat -L -n
  Chain PREROUTING (policy ACCEPT)
  target     prot opt source               destination
  DOCKER     all  --  0.0.0.0/0            0.0.0.0/0           ADDRTYPE match dst-

  Chain POSTROUTING (policy ACCEPT)
  target     prot opt source               destination
  MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-6
  MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-6
  MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
  MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0

  Chain OUTPUT (policy ACCEPT)
  target     prot opt source               destination
  DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8         ADDRTYPE match dst-

 Chain DOCKER (2 references)
 target     prot opt source               destination

This is on the Docker host (docker@domain), not inside a container.

EDIT: See the last comment under surazj's answer.

Solution

I think I met the same problem. I also couldn't start the Hadoop namenode and datanode with the Hadoop command "start-all.sh" in docker1.

That is because it launches the namenode and datanode through "hadoop-daemons.sh", but that fails. The real problem is that "ssh" does not work inside the Docker container (hadoop-daemons.sh starts the daemons over ssh, even for localhost).

So, you can do either of the following:

  • (solution 1):
    Replace every occurrence of "daemons.sh" with "daemon.sh" in start-dfs.sh, then run start-dfs.sh (see the sed one-liner after this list).

  • (solution 2) : do

    $HADOOP_PREFIX/sbin/hadoop-daemon.sh start datanode
    $HADOOP_PREFIX/sbin/hadoop-daemon.sh start namenode
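
For solution 1, the replacement can be scripted; a minimal sketch, assuming $HADOOP_PREFIX points at the Hadoop installation:

    # swap the ssh-based cluster starters for the local single-daemon starter
    sed -i 's/daemons\.sh/daemon.sh/g' $HADOOP_PREFIX/sbin/start-dfs.sh
    $HADOOP_PREFIX/sbin/start-dfs.sh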

You can see that the datanode and namenode are working fine with the command "jps".
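
Illustrative output only (the PIDs will differ):

  $ jps
  1234 NameNode
  2345 DataNode
  3456 Jps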

Regards.
