How do I limit resources for ffmpeg, called from a python-script, running in a docker container?


Question


I deployed a service, that periodically does video encoding on my server; And every time it does, all other services slow down significantly. The encoding is hidden under multiple layers of abstraction. Limiting any of those layers would be fine. (e.g. limiting the docker-container would work just as well as limiting the ffmpeg-sub process.)

My stack:

  1. VPS (ubuntu:zesty)
  2. docker-compose
  3. docker container (ubuntu:zesty)
  4. python
  5. ffmpeg (via subprocess.check_call() in python)

What I want to limit:

  • CPU: a single core
  • RAM: 2 GB max
  • HDD: 4 GB max


It would be possible to recompile ffmpeg if needed.
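For context, the bottom layer of the stack — calling ffmpeg from the Python script via subprocess.check_call() — presumably looks something like the following sketch; the file names, helper names, and codec options are placeholders, not taken from the question:

```python
import subprocess

def build_cmd(src, dst):
    """Assemble an illustrative ffmpeg command line (options are placeholders)."""
    return [
        "ffmpeg",
        "-y",               # overwrite the output file without asking
        "-i", src,          # input file
        "-c:v", "libx264",  # CPU-heavy H.264 encode
        dst,
    ]

def encode(src, dst):
    # check_call blocks until ffmpeg exits and raises CalledProcessError
    # on a non-zero exit status.
    subprocess.check_call(build_cmd(src, dst))
```

Because the encoding runs as an ordinary child process of the script, any limit applied to the container (or to the Python process's cgroup) also constrains ffmpeg.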

Answer


In plain Docker you can achieve each of these limits with command line options:


A container can be limited to a single CPU core (or a hyperthread on current Intel hardware):

docker run \
  --cpus 1 \
  image


or limited by Docker's CPU shares, which default to 1024. This only helps if most of the tasks being slowed down are also in Docker containers, so that they are allocated Docker shares as well.

docker run \
  --cpu-shares 512 \
  image


Limiting memory is a bit finicky as your process will just crash if it hits the limit.

docker run \
  --memory-reservation 2000M \
  --memory 2048M \
  --memory-swap 2048M \
  image
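Since the process is simply killed when it hits the memory limit, the Python layer may want to tell an OOM kill apart from an ordinary encoding failure. On Linux a process terminated by SIGKILL (which is how a cgroup memory limit ends a process) reports a negative return code to subprocess; a minimal sketch, with the helper name made up for illustration:

```python
import signal
import subprocess

def run_limited(cmd):
    """Run a command and classify how it exited.

    A return code of -SIGKILL (or 137 when relayed through a shell)
    usually means the kernel's OOM killer terminated the process.
    """
    result = subprocess.run(cmd)
    if result.returncode in (-signal.SIGKILL, 137):
        return "killed"   # likely hit the memory limit
    if result.returncode != 0:
        return "failed"   # ordinary non-zero exit
    return "ok"
```

This lets the script retry with different encoder settings, or log the limit breach, instead of treating every non-zero exit the same way.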


Block or Device IO is more important than total space for performance. This can be limited per device, so if you keep data on a specific device for your conversion:

docker run \
  --volume /something/on/sda:/conversion \
  --device-read-bps /dev/sda:2mb \
  --device-read-iops /dev/sda:1024 \
  --device-write-bps /dev/sda:2mb \
  --device-write-iops /dev/sda:1024 \
  image 


If you want to limit total disk usage as well, you will need to have the correct storage setup. Quotas are supported on the devicemapper, btrfs and zfs storage drivers, and also with the overlay2 driver when used on an xfs file system that is mounted with the pquota option.

docker run \
  --storage-opt size=120G \
  image



Compose/Service

Docker compose v3 seems to have abstracted some of these concepts away to what can be applied to a service/swarm, so you don't get the same fine-grained control.


For a v3 file, use the resources object to configure limits and reservations for cpu and memory:

services:
  blah:
    image: blah
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2048M
        reservations:
          memory: 2000M


Disk based limits might need a volume driver that supports setting limits.


If you can go back to a v2.2 Compose file, you can use the full range of constraints at the base level of the service, which are analogous to the docker run options:


cpu_count, cpu_percent, cpu_shares, cpu_quota, cpus, cpuset, mem_limit, memswap_limit, mem_swappiness, mem_reservation, oom_score_adj, shm_size
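Applying the question's budget (one core, 2 GB RAM) in a v2.2 file might look like the following sketch; the service name and image are placeholders:

```yaml
version: "2.2"
services:
  encoder:
    image: blah
    cpus: 1               # at most one CPU core
    mem_limit: 2048M      # hard memory ceiling
    memswap_limit: 2048M  # same value as mem_limit disables extra swap
    mem_reservation: 2000M
```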

