How to run Celery with Django on OpenShift 3


Question

What is the easiest way to launch a celery beat and worker process in my django pod?

I'm migrating my OpenShift v2 Django app to OpenShift v3. I'm using the Pro subscription. I'm really a noob with OpenShift v3, Docker, containers and Kubernetes. I have used this tutorial https://blog.openshift.com/migrating-django-applications-openshift-3/ to migrate my app (which works pretty well).

I'm now struggling with how to start Celery. On OpenShift 2 I just used a post_start action hook:

source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate

python $OPENSHIFT_REPO_DIR/wsgi/podpub/manage.py celery worker \
--pidfile="$OPENSHIFT_DATA_DIR/celery/run/%n.pid" \
--logfile="$OPENSHIFT_DATA_DIR/celery/log/%n.log" \
-c 1 \
--autoreload &

python $OPENSHIFT_REPO_DIR/wsgi/podpub/manage.py celery beat \
--pidfile="$OPENSHIFT_DATA_DIR/celery/run/celeryd.pid" \
--logfile="$OPENSHIFT_DATA_DIR/celery/log/celeryd.log" &

It is a quite simple setup. It just uses the Django database as the message broker, no RabbitMQ or anything.
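For context, a database-broker configuration of this kind typically looks something like the following sketch, assuming the old django-celery (djcelery) package together with Kombu's Django ORM transport, which is what provides the manage.py celery commands and the django:// broker (adjust names to the actual project):

# settings.py (sketch): use the Django database as the Celery broker.
import djcelery
djcelery.setup_loader()

# Extends the INSTALLED_APPS already defined earlier in settings.py.
INSTALLED_APPS += (
    'djcelery',                # provides "manage.py celery worker/beat"
    'kombu.transport.django',  # ORM-backed message queue tables
)

BROKER_URL = 'django://'       # store Celery messages in the Django database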

Would an OpenShift "job" be appropriate for that? Or would it be better to use the powershift-image (https://pypi.python.org/pypi/powershift-image) action commands? But I did not understand how to execute them.

Here is the current deployment configuration for my only app, "django":

apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2017-12-27T22:58:31Z
  generation: 67
  labels:
    app: django
  name: django
  namespace: myproject
  resourceVersion: "68466321"
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/django
  uid: 64600436-ab49-11e7-ab43-0601fd434256
spec:
  replicas: 1
  selector:
    app: django
    deploymentconfig: django
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Recreate
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        app: django
        deploymentconfig: django
    spec:
      containers:
      - image: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
        imagePullPolicy: Always
        name: django
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/app-root/src/data
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: django-data
  test: false
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - django
      from:
        kind: ImageStreamTag
        name: django:latest
        namespace: myproject
      lastTriggeredImage: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
    type: ImageChange

I'm using mod_wsgi-express, and this is my app.sh:

ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"

exec mod_wsgi-express start-server $ARGS wsgi/application

Help is very much appreciated. Thank you.

Answer

I have managed to get it working, though I'm not quite happy with it. I will move to a PostgreSQL database very soon. Here is what I did:

mod_wsgi-express has an option called --service-script which starts an additional process besides the actual app, so I updated my app.sh:

#!/bin/bash

ARGS=""

ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"
ARGS="$ARGS --service-script celery_starter scripts/startCelery.py"

exec mod_wsgi-express start-server $ARGS wsgi/application

Mind the last ARGS=... line.

I created a python script that starts up my celery worker and beat. startCelery.py:

import subprocess

# Paths inside the OpenShift v3 pod. They are hard-coded here because the
# OpenShift v2 environment variables of the same name no longer exist.
OPENSHIFT_REPO_DIR = "/opt/app-root/src"
OPENSHIFT_DATA_DIR = "/opt/app-root/src/data"

pathToManagePy = OPENSHIFT_REPO_DIR + "/wsgi/podpub"

# Celery worker: started as a child process, with pid and log files.
worker_cmd = [
    "python",
    pathToManagePy + "/manage.py",
    "celery",
    "worker",
    "--pidfile=" + OPENSHIFT_REPO_DIR + "/%n.pid",
    "--logfile=" + OPENSHIFT_DATA_DIR + "/celery/log/%n.log",
    "-c 1",
    "--autoreload"
    ]
print(worker_cmd)

subprocess.Popen(worker_cmd, close_fds=True)

# Celery beat: schedules the periodic tasks.
beat_cmd = [
    "python",
    pathToManagePy + "/manage.py",
    "celery",
    "beat",
    "--pidfile=" + OPENSHIFT_REPO_DIR + "/celeryd.pid",
    "--logfile=" + OPENSHIFT_DATA_DIR + "/celery/log/celeryd.log",
    ]
print(beat_cmd)

subprocess.Popen(beat_cmd)

This was actually working, but when I tried to launch the celery worker I kept receiving a message saying "Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea! If you really want to continue then you have to set the C_FORCE_ROOT environment variable (but please think about this before you do)."

Even though I added these settings to my settings.py in order to remove the pickle serializer, it kept giving me the same error message:

CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACEEPT_CONTENT = ['json']

I don't know why. In the end I added C_FORCE_ROOT to my .s2i/environment:

C_FORCE_ROOT=true
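A possible explanation for the stubborn warning, offered here as a guess: CELERY_ACEEPT_CONTENT is not a setting name Celery reads; the actual name is CELERY_ACCEPT_CONTENT, so the accepted-content list probably never stopped including pickle. The corrected settings would be:

# settings.py: note the spelling ACCEPT, not ACEEPT
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']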

Now it's working, at least I think so. My next scheduled job will only run in a few hours. I'm still open to any further suggestions and tips.
