Run celery with django on AWS Elastic Beanstalk using environment variables

Question
I want to run celery on AWS Elastic Beanstalk with my Django app. I followed this great answer by @yellowcap (How do you run a worker with AWS Elastic Beanstalk?), so my supervisord.conf looks like this:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A myappname --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
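The core of the `celeryenv` line is a pipeline that turns shell `export` statements into supervisord's comma-separated `environment` format. A simplified sketch of that transformation, using a temporary file with hypothetical contents in place of `/opt/python/current/env`:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for /opt/python/current/env
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
export DJANGO_SETTINGS_MODULE="myappname.settings"
export RDS_HOSTNAME="db.example.com"
EOF

# Same core transformation as in run_supervised_celeryd.sh:
# join lines with commas, strip the "export " prefixes
celeryenv=$(cat "$envfile" | tr '\n' ',' | sed 's/export //g')
# ${var%?} drops the trailing comma left by tr
celeryenv=${celeryenv%?}

echo "$celeryenv"
# DJANGO_SETTINGS_MODULE="myappname.settings",RDS_HOSTNAME="db.example.com"
rm -f "$envfile"
```

The extra `sed` expressions in the real script additionally rewrite `$PATH` into supervisord's `%(ENV_PATH)s` expansion syntax and drop `$PYTHONPATH`/`$LD_LIBRARY_PATH` references, which supervisord could not expand.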
His code worked fine until I decided to migrate some variables from my settings.py to my Elastic Beanstalk environment properties.
Indeed, I get the following error when the script is called:
for \'environment\' is badly formatted'>: file: /usr/lib64/python2.7/xmlrpclib.py line: 800
celeryd: ERROR (no such process)
Thanks for your help.

Answer
This is due to how Supervisor parses config files [1].
Your environment setting contains an unescaped % character, probably coming from your Django SECRET_KEY.
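The failure mode can be reproduced outside of Elastic Beanstalk. Supervisord expands `%(name)s`-style expressions in config values, the same scheme as Python's configparser, so a bare `%` in a value is rejected as "badly formatted". A minimal sketch, assuming python3 is available and using configparser as a stand-in for supervisord's parser:

```shell
#!/usr/bin/env bash
# Reproduce the "badly formatted" error on a bare % inside an
# environment value, using configparser's %-interpolation rules
# (the same expansion scheme supervisord applies).
result=$(python3 - <<'EOF'
import configparser

cp = configparser.ConfigParser()
cp.read_string('[program:celeryd]\nenvironment=SECRET_KEY="a%b"\n')
try:
    cp.get("program:celeryd", "environment")
    print("ok")
except configparser.InterpolationSyntaxError:
    print("badly formatted")
EOF
)
echo "$result"
```

With `SECRET_KEY="a%%b"` instead, the value parses cleanly and the `%%` collapses back to a single `%` on expansion, which is exactly what the fix below relies on.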
The following worked for me - try appending | sed 's/%/%%/g' to the pipe chain here:

celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
Resulting line:

celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
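The effect of the added expression is easy to check in isolation. A sketch with a hypothetical environment line containing literal % characters:

```shell
#!/usr/bin/env bash
# Hypothetical environment line with literal % characters in the value
line='SECRET_KEY="k3y%w1th%percent"'

# Escape every % as %% so supervisord's %-expansion leaves it alone
escaped=$(echo "$line" | sed 's/%/%%/g')

echo "$escaped"
# SECRET_KEY="k3y%%w1th%%percent"
```

When supervisord later reads the generated celery.conf, each `%%` is expanded back to a single `%`, so the worker sees the original value.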
[1] https://github.com/Supervisor/supervisor/issues/291