Is there a maximum number of containers running on a Docker host?


Question

Basically, the title says it all: Is there any limit in the number of containers running at the same time on a single Docker host?

Answer

There are a number of system limits you can run into (and work around) but there's a significant amount of grey area depending on

  1. How you configure your Docker containers.
  2. What you run in the containers.
  3. What kernel, distribution, and Docker version you are on.

The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7. The kernel is 4.4.8.

Docker creates or uses a number of resources to run a container, on top of what you run inside the container.

  • Attaches a virtual ethernet adapter to the docker0 bridge (1023 max per bridge)
  • Mounts an AUFS and shm file system (1048576 mounts max per fs type)
  • Creates an AUFS layer on top of the image (127 layers max)
  • Forks 1 extra docker-containerd-shim management process (~3MB per container on average; see sysctl kernel.pid_max)
  • Docker API/daemon internal data to manage the container (~400k per container)
  • Creates kernel cgroups and namespaces
  • Opens file descriptors (~15 + 1 per running container at startup; ulimit -n and sysctl fs.file-max)
  • Port mapping with -p runs an extra process per port number on the host (~4.5MB per port on average before 1.12, ~300k per port after 1.12; also sysctl kernel.pid_max)
  • --net=none and --net=host would remove the networking overheads.
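The process and file-descriptor limits mentioned above can be read directly from the host. A minimal sketch, assuming a standard Linux procfs layout (the figures vary per kernel and distro):

```shell
# Read the system limits mentioned above from their standard procfs locations.
# These are host-wide caps, not Docker-specific settings.
pid_max=$(cat /proc/sys/kernel/pid_max)   # cap on processes/threads (one shim per container)
file_max=$(cat /proc/sys/fs/file-max)     # system-wide open file descriptor cap
fd_limit=$(ulimit -n)                     # per-process fd limit (~15 fds per running container)
echo "kernel.pid_max = $pid_max"
echo "fs.file-max    = $file_max"
echo "ulimit -n      = $fd_limit"
```

Dividing these caps by the per-container figures above gives a rough ceiling, though in practice you will usually hit memory or IO contention first.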

The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)

If you are running apps in a virtual machine (node, ruby, python, java), memory usage is likely to become your main issue.

IO across 1000 processes would cause a lot of IO contention.

1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above for garbage collection).

If you create network connections from 1000 containers, the host's network layer will get a workout.

It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.
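As part of that tuning, the kernel caps noted above can be raised via sysctl. A sketch of a drop-in config fragment — the keys are real, but the values are purely illustrative, not recommendations:

```
# /etc/sysctl.d/99-containers.conf -- illustrative values only
kernel.pid_max = 4194304
fs.file-max = 2097152
```

Apply with sysctl --system (or sysctl -p on the file), and remember that ulimit -n is a separate per-process limit set via limits.conf or the daemon's service unit.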

1023 Docker busybox containers running nc -l -p 80 -e echo host use up about 1GB of kernel memory and 3.5GB of system memory.

1023 plain nc -l -p 80 -e echo host processes running on a host use about 75MB of kernel memory and 125MB of system memory.

Starting 1023 containers serially took ~8 minutes.

Killing 1023 containers serially took ~6 minutes.
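A minimal sketch of the kind of serial start loop timed above, assuming a running Docker daemon and the busybox image (the container names are illustrative; the loop is skipped safely if no daemon is reachable):

```shell
# Serially start N busybox containers, each running the same tiny nc listener
# used for the memory figures above, and report the elapsed wall-clock time.
N=${N:-1023}
start=$(date +%s)
for i in $(seq 1 "$N"); do
  if docker info >/dev/null 2>&1; then   # only run if a Docker daemon is reachable
    docker run -d --name "nc-$i" busybox nc -l -p 80 -e echo host >/dev/null
  fi
done
msg="started $N containers in $(( $(date +%s) - start ))s"
echo "$msg"
```

Cleanup would be the mirror loop, e.g. docker rm -f "nc-$i", which is the serial kill path the ~6 minute figure refers to.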
