What role does network bridge `docker0` play in k8s with flannel?
Question
k8s version: v1.10.4
flannel version: v0.10.0
docker version: v1.12.6
When I use the command `brctl show` on a node, it shows as below:
[root@node03 tmp]# brctl show
bridge name bridge id STP enabled interfaces
cni0 8000.0a580af40501 no veth39711246
veth591ea0bf
veth5b889fed
veth61dfc48a
veth6ef58804
veth75f5ef36
vethc162dc8a
docker0 8000.0242dfd605c0 no
It shows that the vethXXX interfaces are bound to the network bridge named cni0, but when I use the command `ip addr`, it shows:
[root@node03 tmp]# ip addr |grep veth
6: veth61dfc48a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
7: veth591ea0bf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
9: veth6ef58804@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
46: vethc162dc8a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
55: veth5b889fed@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
61: veth75f5ef36@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
78: veth39711246@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
These veth devices all appear to be bound to `if3`, but `if3` is not cni0; it is `docker0`:
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
It seems that the network bridge `docker0` is useless, but `ip addr` shows that all the veth devices are bound to it. What role does the network bridge `docker0` play in k8s with flannel? Thanks.
Answer
There are two network models here: Docker's and Kubernetes's.
Docker model
By default, Docker uses host-private networking. It creates a virtual bridge, called `docker0` by default, and allocates a subnet from one of the private address blocks defined in RFC 1918 for that bridge. For each container that Docker creates, it allocates a virtual Ethernet device (called a veth) which is attached to the bridge. The veth is mapped to appear as `eth0` in the container, using Linux namespaces. The in-container `eth0` interface is given an IP address from the bridge's address range.
The result is that Docker containers can talk to other containers only if they are on the same machine (and thus the same virtual bridge). Containers on different machines cannot reach each other; in fact they may end up with the exact same network ranges and IP addresses.
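On a plain Docker host you can observe this layout directly. A minimal sketch (the container name `demo` and the nginx image are placeholders; the commands are skipped if no Docker daemon is reachable):

```shell
#!/bin/sh
BRIDGE=docker0   # Docker's default bridge

if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    # The bridge holds an RFC 1918 subnet, commonly 172.17.0.1/16.
    ip addr show "$BRIDGE"

    # Starting a container makes Docker attach a new vethXXXX to docker0.
    docker run -d --name demo nginx

    # List the interfaces whose master is docker0 (modern alternative to brctl show).
    ip link show master "$BRIDGE"

    # The peer end appears inside the container as eth0 with a bridge-range IP.
    docker exec demo ip addr show eth0
    docker rm -f demo
fi
```

Since each host picks its bridge subnet independently, two hosts can hand out the same container IPs, which is exactly why this model alone cannot provide cluster-wide connectivity.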
Kubernetes model
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
- all containers can communicate with all other containers without NAT
- all nodes can communicate with all containers (and vice-versa) without NAT
- the IP that a container sees itself as is the same IP that others see it as
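The third requirement is easy to check on a live cluster. A hedged sketch (assumes `kubectl` access and a running pod; the pod name `web` is a placeholder, and the commands are skipped when no cluster is reachable):

```shell
#!/bin/sh
POD=web   # placeholder pod name

if command -v kubectl >/dev/null 2>&1 && kubectl get pod "$POD" >/dev/null 2>&1; then
    # The IP the rest of the cluster sees for this pod...
    kubectl get pod "$POD" -o jsonpath='{.status.podIP}'; echo
    # ...is the same IP the pod sees on its own eth0 (no NAT in between).
    kubectl exec "$POD" -- ip addr show eth0
fi
```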
Kubernetes applies IP addresses at the Pod scope: containers within a Pod share their network namespace, including their IP address. This means that containers within a Pod can all reach each other's ports on `localhost`. It does imply that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model. It is implemented, using Docker, as a "pod container" which holds the network namespace open while "app containers" (the things the user specified) join that namespace with Docker's `--net=container:<id>` function.
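The "pod container" trick can be reproduced with plain Docker. A sketch (the pause image tag is an assumption; any long-running image would serve as the namespace holder, and the commands are skipped if no Docker daemon is reachable):

```shell
#!/bin/sh
SANDBOX=pod-sandbox   # placeholder name for the namespace-holding container

if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    # The "pod container" does nothing but hold the network namespace open.
    docker run -d --name "$SANDBOX" k8s.gcr.io/pause:3.1

    # An "app container" joins that namespace, so both share one IP and
    # can reach each other on localhost.
    docker run -d --name app --net=container:"$SANDBOX" nginx

    # The app container's eth0 is the sandbox's interface.
    docker exec app ip addr show eth0
    docker rm -f app "$SANDBOX"
fi
```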
As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port is allocated on the host Node and traffic is forwarded to the Pod. The Pod itself is blind to the existence or non-existence of host ports.
In order to integrate the platform with the underlying network infrastructure, Kubernetes provides a plugin specification called the Container Network Interface (CNI). As long as the Kubernetes fundamental requirements are met, vendors can implement the network stack as they like, typically using overlay networks to support multi-subnet and multi-az clusters.
Flannel, which is a popular CNI plugin, implements such an overlay network.
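On each node, flannel's CNI configuration typically delegates pod interface creation to the standard `bridge` plugin, which creates the `cni0` bridge seen in the question and leaves `docker0` unused. A sketch of a typical kube-flannel config; the exact fields and file path (commonly `/etc/cni/net.d/10-flannel.conflist`) vary by version:

```json
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The delegated `bridge` plugin names its bridge `cni0` by default, which is why the pod veths in the question are attached to `cni0` rather than `docker0`.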
You can read more about other CNIs here. The Kubernetes approach is explained in the Cluster Networking docs. I also recommend reading Kubernetes Is Hard: Why EKS Makes It Easier for Network and Security Architects, which explains how Flannel works, and another article from Medium.
Hope this answers your question.