Are the containers in a Kubernetes pod part of the same cgroup?


Question


In a multi-container Kubernetes pod, are the containers part of the same cgroup (along with the pod), or is a separate cgroup created for each container?

Answer


Cgroups

Containers in a pod share part of the cgroup hierarchy, but each container gets its own cgroup. We can try this out and verify it ourselves.


  1. Start a multi-container pod.


# cat mc2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
  - name: container1
    image: ubuntu
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]

  - name: container2
    image: ubuntu
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]


# kubectl apply -f mc2.yaml
pod/two-containers created



  2. Find the cgroups of the processes on the host.


# ps -ax | grep while | grep -v grep
19653 ?        Ss     0:00 /bin/bash -c -- while true; do sleep 30; done;
19768 ?        Ss     0:00 /bin/bash -c -- while true; do sleep 30; done;


# cat /proc/19653/cgroup
12:hugetlb:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
11:memory:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
10:perf_event:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
9:freezer:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
8:cpuset:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
7:net_cls,net_prio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
6:cpu,cpuacct:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
5:blkio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
4:pids:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
3:devices:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
2:rdma:/
1:name=systemd:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
0::/


# cat /proc/19768/cgroup
12:hugetlb:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
11:memory:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
10:perf_event:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
9:freezer:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
8:cpuset:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
7:net_cls,net_prio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
6:cpu,cpuacct:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
5:blkio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
4:pids:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
3:devices:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
2:rdma:/
1:name=systemd:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
0::/


As you can see, the containers in the pod share the cgroup hierarchy up to /kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011, and then each gets its own cgroup. (These containers are under the besteffort cgroup because we have not specified any resource requests.)
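The shared part of the hierarchy can be read off the two /proc/&lt;pid&gt;/cgroup listings above; as a small bash sketch (paths copied from the output above), the longest common directory prefix of the two container paths is exactly the pod-level cgroup:

```shell
# cgroup paths of the two containers, taken from /proc/19653/cgroup and /proc/19768/cgroup
path1=/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
path2=/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765

# split on "/" and keep the leading components both paths agree on
IFS=/ read -ra a <<< "$path1"
IFS=/ read -ra b <<< "$path2"
common=""
for i in "${!a[@]}"; do
  [ "${a[$i]}" = "${b[$i]:-}" ] || break
  [ -n "${a[$i]}" ] && common="$common/${a[$i]}"
done
echo "$common"   # -> /kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011
```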

Another hint that containers run in their own cgroups is that Kubernetes lets you set resource requests at the container level.
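For example, a variation of the pod spec above with per-container requests (a sketch; the pod name and values here are illustrative, and setting requests would also move the pod out of the besteffort cgroup into the burstable QoS class):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-limited   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: container1
    image: ubuntu
    resources:
      requests:
        cpu: "100m"      # each container gets its own cpu/memory settings,
        memory: "64Mi"   # enforced via that container's own cgroup
  - name: container2
    image: ubuntu
    resources:
      requests:
        cpu: "200m"
        memory: "128Mi"
```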


You can also find the cgroups of a container by logging into the container and viewing the /proc/self/cgroup file. (This may not work in recent versions of Kubernetes if the cgroup namespace is enabled.)

# kubectl exec -it two-containers -c container2 bash
# root@two-containers:# cat /proc/self/cgroup
12:hugetlb:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
11:memory:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
10:perf_event:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
9:freezer:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
8:cpuset:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
7:net_cls,net_prio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
6:cpu,cpuacct:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
5:blkio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
4:pids:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
3:devices:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
2:rdma:/
1:name=systemd:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
0::/






Namespaces

Containers in a pod also share the network and IPC namespaces by default.

# cd /proc/19768/ns/
# /proc/19768/ns# ls -lrt
total 0
lrwxrwxrwx 1 root root 0 Jul  4 01:41 uts -> uts:[4026536153]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 pid_for_children -> pid:[4026536154]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 pid -> pid:[4026536154]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 net -> net:[4026536052]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 mnt -> mnt:[4026536152]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 ipc -> ipc:[4026536049]
lrwxrwxrwx 1 root root 0 Jul  4 01:41 cgroup -> cgroup:[4026531835]


# cd /proc/19653/ns
# /proc/19653/ns# ls -lrt
total 0
lrwxrwxrwx 1 root root 0 Jul  4 01:42 uts -> uts:[4026536150]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 pid_for_children -> pid:[4026536151]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 pid -> pid:[4026536151]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 net -> net:[4026536052]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 mnt -> mnt:[4026536149]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 ipc -> ipc:[4026536049]
lrwxrwxrwx 1 root root 0 Jul  4 01:42 cgroup -> cgroup:[4026531835]


As you can see, the containers share the network and IPC namespaces. You can also make the containers share the PID namespace using the shareProcessNamespace field in the pod spec.

https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace
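A minimal sketch of the same two-container pod with a shared PID namespace enabled (the field name comes from the link above; the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-shared-pid   # illustrative name
spec:
  shareProcessNamespace: true   # containers in this pod see each other's processes
  restartPolicy: Never
  containers:
  - name: container1
    image: ubuntu
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
  - name: container2
    image: ubuntu
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
```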



cgroup:[4026531835] is the same for both containers. Is this (the cgroup namespace) different from the cgroups the containers are part of?


cgroups limit the resources (CPU, memory, etc.) that a process (or group of processes) can use.


Namespaces isolate and limit the visibility that a process (or group of processes) has over system resources like the network, process trees, etc. There are different kinds of namespaces, such as network, IPC, etc. One of these is the cgroup namespace. Using a cgroup namespace, you can limit the visibility of other cgroups from a process (or group of processes).


A cgroup namespace virtualises a process's view of its cgroups. Currently, if you try cat /proc/self/cgroup from within the container, you can see the full cgroup hierarchy starting from the global cgroup root. This can be avoided using cgroup namespaces, available from Kubernetes v1.19. Docker also supports this from version 20.10. When a cgroup namespace is used while creating the container, you see the cgroup root as / inside the container instead of the global cgroup hierarchy.
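You can see the same effect outside Kubernetes with unshare from util-linux (a sketch; it assumes you run as root on a kernel with cgroup namespace support):

```shell
# Without a new cgroup namespace: the full hierarchy is visible.
cat /proc/self/cgroup

# In a fresh cgroup namespace, paths are shown relative to the cgroup
# the process was in at unshare time, so the root appears as "/".
unshare --cgroup cat /proc/self/cgroup
```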

https://man7.org/linux/man-pages/man7/cgroup_namespaces.7.html
