What is the relation between docker0 and eth0?


Problem description

I know that by default Docker creates a virtual bridge docker0, and all container networks are linked to docker0.

As illustrated above:

  • the container's eth0 is paired with vethXXX
  • vethXXX is linked to docker0, the same way a machine is linked to a switch

But what is the relation between docker0 and host eth0? More specifically:

  1. When a packet flows from the container to docker0, how does it know it will be forwarded to eth0, and then to the outside world?
  2. When an external packet arrives at eth0, why is it forwarded to docker0 and then to the container, instead of being processed or dropped?

Question 2 can be a little confusing; I will keep it there and explain a little more:

  • It is a return packet for a connection initiated by the container (in question 1): since the outside world does not know the container network, the packet is sent to the host's eth0. How is it forwarded to the container? I mean, there must be some place that stores this information; how can I check it?

Thanks in advance!


After reading the answer and the official networking articles, I find the following diagram more accurate: docker0 and eth0 have no direct link; instead, packets can be forwarded between them.

http://dockerone.com/uploads/article/20150527/e84946a8e9df0ac6d109c35786ac4833.png

Solution

There is no direct link between the default docker0 bridge and the host's Ethernet devices. If you use the --net=host option for a container, then the host's network stack will be available in the container.
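A quick way to see the difference (a sketch assuming a Linux host with Docker and the busybox image available):

```shell
# Default bridge network: the container sees only its own eth0/lo,
# attached via a veth pair to docker0 on the host side.
docker run --rm busybox ip link

# --net=host: the container shares the host's network namespace,
# so docker0, eth0, etc. all appear inside the container.
docker run --rm --net=host busybox ip link
```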

When a packet flows from the container to docker0, how does it know it will be forwarded to eth0, and then to the outside world?

The docker0 bridge has the .1 address of the Docker network assigned to it; this is usually something like 172.17.0.1 or 172.18.0.1.

$ ip address show dev docker0
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:03:47:33:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

Containers are assigned a veth interface which is attached to the docker0 bridge.

$ bridge link
10: vethcece7e5 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2

Containers created on the default Docker network receive the .1 address as their default route.

$ docker run busybox ip route show
default via 172.17.0.1 dev eth0 
172.17.0.0/16 dev eth0  src 172.17.0.3 

From there, Docker uses NAT MASQUERADE for outbound traffic, which follows the standard outbound routing on the host; this may or may not involve eth0.

$ iptables -t nat -vnL POSTROUTING
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0  
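Note that forwarding between docker0 and eth0 also relies on the kernel's IP forwarding switch, which the Docker daemon enables at startup:

```shell
# 1 means the kernel will forward packets between interfaces
# (e.g. from docker0 out through eth0); the Docker daemon sets this
# at startup, equivalent to: sysctl -w net.ipv4.ip_forward=1
cat /proc/sys/net/ipv4/ip_forward
```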

iptables handles the connection tracking and return traffic.
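That conntrack state is the "place where the information is stored" that the question asks about, and it can be inspected directly. A sketch, assuming the default 172.17.0.0/16 Docker network; the `conntrack` tool ships in a separate package on most distros:

```shell
# Each NAT'ed flow records the original container source address and
# the address that reply traffic should be rewritten back to.
sudo conntrack -L -s 172.17.0.0/16

# Older kernels expose the same table as a file:
cat /proc/net/nf_conntrack
```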

When an external packet arrives at eth0, why is it forwarded to docker0 and then to the container, instead of being processed or dropped?

If you are asking about the return path for outbound traffic from the container, see the iptables output above: the MASQUERADE rule maps the connection back through.

If you mean new inbound traffic, packets are not forwarded into a container by default. The standard way to achieve this is to set up a port mapping: Docker launches a daemon that listens on the host on port X and forwards to the container on port Y.
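A sketch of what that looks like in practice (the port numbers and the nginx image are arbitrary examples):

```shell
# Publish container port 80 on host port 8080; Docker starts a
# docker-proxy listener on the host and adds a DNAT rule.
docker run -d -p 8080:80 nginx

# The DNAT rule that steers new inbound traffic into the container,
# e.g. a line like "DNAT tcp dpt:8080 to:<container-ip>:80":
iptables -t nat -vnL DOCKER
```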

I'm not sure why NAT wasn't used for inbound traffic as well. I've run into some issues trying to map large numbers of ports into containers, which led to mapping real-world interfaces completely into containers.
