Daemonize Celerybeat in Elastic Beanstalk (AWS)


Problem description

I am trying to run celerybeat as a daemon in Elastic Beanstalk. Here is my config file:

files:
  "/opt/python/log/django.log":
    mode: "000666"
    owner: ec2-user
    group: ec2-user
    content: |
      # Log file
    encoding: plain
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A avtotest --loglevel=INFO

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create celerybeat configuration script
      celerybeatconf="[program:celerybeat]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A avtotest --loglevel=INFO

      ; remove the -A avtotest argument if you are not using an app instance

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celerybeat.log
      stderr_logfile=/var/log/celerybeat.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999

      environment=$celeryenv"

      # Create the celery and beat supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
          then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd

This file daemonizes both celery and celerybeat. Celery is working fine, but celerybeat is not: I don't see a celerybeat.log file being created, which I think suggests that celerybeat is not running.
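For reference, one way to confirm whether supervisord actually picked up the celerybeat program (a minimal check, assuming SSH access to the instance; the paths come from the config above):

# List the programs supervisord knows about and their current state
supervisorctl -c /opt/python/etc/supervisord.conf status

# The worker log from the same config is created, so compare against it
ls -l /var/log/celery-worker.log /var/log/celerybeat.log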

Any thoughts on this?

I will post more code if needed. Thanks for the help.

Answer

Your supervisord syntax is a bit off. First of all, you may need to SSH into your instance and edit the supervisord.conf file directly (vim /opt/python/etc/supervisord.conf) to fix these lines:

echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf

should be

echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf

To run celerybeat, and to make sure that it only runs ONCE across all your machines, you should place these lines in your config files:

04_killotherbeats:
  command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
05_restartbeat:
  command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
  leader_only: true
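These entries are presumably meant to sit under container_commands: in one of your .ebextensions config files (an assumption about placement, since the snippet above omits the parent key); leader_only: true is what restricts the restart to a single instance in the environment, so beat runs exactly once:

container_commands:
  # Kill any stray beat processes on every instance
  04_killotherbeats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
  # Restart celerybeat on the leader instance only
  05_restartbeat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true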

