Cron containers for Docker - how do they actually work?
Question
I have been using Docker for a couple of months now, and am working on dockerizing various different server images. One consistent issue is that many servers need to run cron jobs. There is a lot of discussion about that online (including on Stack Overflow), but I don't completely understand the mechanics of it.
Currently, I am using the host's cron and docker exec into each container to run a script. I created a convention for the script's name and location; all my containers have the same script. This avoids having the host's cron depend on the individual containers.
Basically, once a minute, the host's cron does this:
for each container
docker exec -it <containername> /cronscript/minute-script
That works, but makes the containers depend on the host.
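Concretely, the host-side setup sketched above could be a cron.d file like the following (the path and script name are illustrative, and it assumes every container ships the same /cronscript/minute-script):

```
# /etc/cron.d/container-minute-scripts (illustrative path)
# Note: no -t flag needed - cron has no TTY to allocate
* * * * * root for c in $(docker ps --format '{{.Names}}'); do docker exec "$c" /cronscript/minute-script; done
```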
What I would like to do is create a cron container that kicks off a script within each of the other containers - but I am not aware of an equivalent to "docker exec" that works from one container to the other.
The specific situations I have right now are running a backup in a MySQL container, and running the cron jobs Moodle requires to be run every minute. Eventually, there will be additional things I need to do via cron. Moodle uses command-line PHP scripts.
What is the "proper" dockerized way to kick off a script from one container in another container?
Update: maybe it helps to mention my specific use cases, although there will be more as time goes on.
Currently, cron needs to do the following:
- Perform a database dump from MySQL. I can do that via mysqldump TCP link from a cron container; the drawback here is that I can't limit the backup user to host 127.0.0.1. I might also be able to somehow finagle the MySQL socket into the cron container via a volume.
- Perform regular maintenance on a Moodle installation. Moodle includes a php command line script that runs all of the maintenance tasks. This is the biggie for me. I can probably run this script through a volume, but Moodle was not designed with that situation in mind, and I would not rule out race conditions. Also, I do not want my moodle installation in a volume because it makes updating the container much harder (remember that in Docker, volumes are not reinitialized when you update the container with a new image).
- Future: perform routine maintenance on a number of my other servers, such as cleaning out email queues, etc.
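For illustration, the first two items could be entries in a cron container's crontab; the host alias, user name, paths, and schedules here are all assumptions, not details from the original setup (admin/cli/cron.php is Moodle's command-line cron entry point):

```
# Dump all databases over the TCP link (assumes a 'mysql' host alias and a backup user)
0 2 * * * mysqldump -h mysql -u backup -p"$BACKUP_PASSWORD" --all-databases > /backups/all.sql
# Run Moodle's maintenance tasks every minute
* * * * * php /var/www/moodle/admin/cli/cron.php > /dev/null
```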
My solution is:
- install crond inside the container
- install your software
- run cron as a daemon
- run your software
Part of my Dockerfile
FROM debian:jessie
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY .crontab /usr/src/app
# Set timezone
RUN echo "Europe/Warsaw" > /etc/timezone \
    && dpkg-reconfigure --frontend noninteractive tzdata
# Cron, mail
RUN set -x \
    && apt-get update \
    && apt-get install -y cron rsyslog mailutils --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*
CMD rsyslogd && env > /tmp/crontab && cat .crontab >> /tmp/crontab && crontab /tmp/crontab && cron -f
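The CMD's env-merging step can be demonstrated on its own: cron treats leading KEY=VALUE lines in a crontab as variable assignments, so prepending the output of env gives every task the container's environment. The file names below are made up for the demo:

```shell
# 1. Capture the current environment as KEY=VALUE lines
env > /tmp/demo_crontab
# 2. Append the actual task lines below the variable assignments
printf '%s\n' '* * * * * echo hello >> /tmp/demo_log' >> /tmp/demo_crontab
# Caveat: environment values containing newlines or % would corrupt the crontab format
# The merged file is what 'crontab /tmp/crontab' would then install:
grep -c '^\* \* \* \* \*' /tmp/demo_crontab
```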
Description
- Set the timezone, because cron needs it to run tasks at the correct times
- Install the cron package - the package with the cron daemon
- Install the rsyslog package to log cron task output
- Install the mailutils package if you want to send e-mails from cron tasks
- Run rsyslogd
- Copy the ENV variables to a tmp file, because cron runs tasks with a minimal environment and your tasks may need access to the container's ENV variables
- Append your .crontab file (with your tasks) to the tmp file
- Set the root crontab from the tmp file
- Run the cron daemon
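An example .crontab file for such an image might look like the following; the tasks and paths are hypothetical placeholders:

```
# m h dom mon dow  command
MAILTO=admin@example.com
*/15 * * * * /usr/src/app/sync.sh >> /var/log/sync.log 2>&1
0 3 * * * /usr/src/app/nightly-backup.sh
```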
I use this in my containers and it works very well.
One process per container
If you like this paradigm, then make one Dockerfile per cron task, e.g.
Dockerfile - main program
Dockerfile_cron_task_1 - cron task 1
Dockerfile_cron_task_2 - cron task 2
and build all containers:
docker build -f Dockerfile_cron_task_1 ...
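A sketch of looping over the per-task Dockerfiles - printed as a dry run here, since the file names are hypothetical; drop the echo to actually build:

```shell
# Print one docker build command per task Dockerfile (dry run)
for f in Dockerfile_cron_task_1 Dockerfile_cron_task_2; do
  tag="${f#Dockerfile_}"                 # derive an image tag, e.g. cron_task_1
  echo docker build -f "$f" -t "$tag" .
done
```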