How do you run a worker with AWS Elastic Beanstalk?


Question

I am launching a Django application on AWS Elastic Beanstalk. I'd like to run a background task or worker in order to run Celery.

I cannot find out whether this is possible, and if so, how it could be achieved.

Here is what I am doing right now, but it produces an event type error every time.

container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
  50_sqs_email:
    command: "./manage.py celery worker --loglevel=info"
    leader_only: true

Thanks for the help,

Answer

As @chris-wheadon suggested in his comment, you should try to run celery as a daemon in the background. AWS Elastic Beanstalk already uses supervisord to run some daemon processes, so you can leverage that to run celeryd and avoid creating a custom AMI for this. It works nicely for me.

What I do is programmatically add a celeryd config file to the instance after the app is deployed to it by EB. The tricky part is that the file needs to set the environment variables required by the daemon (such as AWS access keys, if you use S3 or other services in your app).

Below is a copy of the script that I use. Add it to the .ebextensions folder that configures your EB environment.

The setup script creates a file in the undocumented /opt/elasticbeanstalk/hooks/appdeploy/post/ folder that exists on all EB instances. Any shell script placed there is executed post-deployment. The shell script works as follows:

  1. In the celeryenv variable, the virtualenv environment variables are stored in a format that follows the supervisord notation: a comma-separated list of env variables (the sketch after this list illustrates the transformation).
  2. The script then creates a variable celeryconf that contains the configuration file as a string, which includes the previously parsed env variables.
  3. This variable is then piped into a file called celery.conf, a supervisord configuration file for the celery daemon.
  4. Finally, the path to the newly created config file is added to the main supervisord.conf file, if it is not already there.
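
To make step 1 concrete, here is a rough Python equivalent of the cat/tr/sed pipeline used in the script below. The sample env file contents are hypothetical and only illustrate the shape of the transformation: the export prefix is dropped, newlines become commas, and $PATH is rewritten to supervisord's %(ENV_PATH)s notation.

# Rough illustration of the env-file transformation done by the shell
# pipeline in the deploy hook (the sample input is hypothetical).
raw_env = """export DJANGO_SETTINGS_MODULE="myappname.settings"
export AWS_ACCESS_KEY_ID="AKIA..."
export PATH="/opt/python/run/venv/bin:$PATH"
"""

celeryenv = (
    raw_env
    .replace("export ", "")               # sed 's/export //g'
    .replace("$PATH", "%(ENV_PATH)s")     # sed 's/$PATH/%(ENV_PATH)s/g'
    .replace("$PYTHONPATH", "")           # sed 's/$PYTHONPATH//g'
    .replace("$LD_LIBRARY_PATH", "")      # sed 's/$LD_LIBRARY_PATH//g'
    .replace("\n", ",")                   # tr '\n' ','
    .rstrip(",")                          # ${celeryenv%?} drops the trailing comma
)

print(celeryenv)
# DJANGO_SETTINGS_MODULE="myappname.settings",AWS_ACCESS_KEY_ID="AKIA...",PATH="/opt/python/run/venv/bin:%(ENV_PATH)s"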

Here is a copy of the script:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
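      # Strip the trailing comma produced by the newline-to-comma translation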
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A myappname --loglevel=INFO

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
          then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
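
As a side note, the supervisord command above assumes a Celery app that can be located with -A myappname. In case it helps, a minimal sketch of such an app for a standard Django project might look like the following; myappname and the settings path are placeholders, not part of the original answer.

# myappname/celery.py - minimal Celery app sketch (all names are placeholders)
import os

from celery import Celery

# Make sure the Django settings module is set before the app loads its config.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myappname.settings")

app = Celery("myappname")

# Pull Celery configuration from the Django settings module.
app.config_from_object("django.conf:settings")

# Auto-discover tasks.py modules in the installed Django apps.
app.autodiscover_tasks()

With that in place, you can check the worker manually with /opt/python/run/venv/bin/celery worker -A myappname --loglevel=INFO before letting supervisord manage it.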
