How to structure celery tasks
Question
I have 2 types of tasks: async tasks and scheduled tasks. Here is my directory structure:
proj
|
-- tasks
|
-- __init__.py
|
-- celeryapp.py => celery instance defined in this file.
|
-- celeryconfig.py
|
-- async
| |
| -- __init__.py
| |
| -- task1.py => from proj.tasks.celeryapp import celery
| |
| -- task2.py => from proj.tasks.celeryapp import celery
|
-- schedule
|
-- __init__.py
|
-- task1.py => from proj.tasks.celeryapp import celery
|
-- task2.py => from proj.tasks.celeryapp import celery
But when I run the celery worker as below, it does not work: it cannot pick up tasks from the celery beat scheduler.
$ celery worker --app=tasks -Q my_queue,default_queue
So, is there any best practice for organizing multiple task files?
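One common cause of a worker silently ignoring tasks is that it never imports the modules that define them, so nothing gets registered. A minimal sketch of what `celeryconfig.py` could declare, assuming the old-style (Celery 3.x) setting names that match the `celery worker --app=tasks` invocation above; the task name `tasks.schedule.task1.run` and the routing are hypothetical:

```python
# celeryconfig.py -- a sketch, not the asker's actual config.
# Note: a package named `async` clashes with the reserved keyword
# introduced in Python 3.7, so these dotted paths assume an older Python.

# Make the worker import every module that defines tasks.
CELERY_IMPORTS = (
    'tasks.async.task1',
    'tasks.async.task2',
    'tasks.schedule.task1',
    'tasks.schedule.task2',
)

# Route a (hypothetical) scheduled task to the queue the worker listens on.
CELERY_ROUTES = {
    'tasks.schedule.task1.run': {'queue': 'my_queue'},
}

# Everything unrouted lands on the default queue from the -Q list.
CELERY_DEFAULT_QUEUE = 'default_queue'
```

With this in place, `celery worker --app=tasks -Q my_queue,default_queue` consumes both queues and has every task module imported at startup.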
Answer
Based on the Celery documentation, you can lay out your Celery task modules like this:
For example, if you have an (imagined) directory tree like this:
|
|-- foo
| |-- __init__.py
| |-- tasks.py
|
|-- bar
|-- __init__.py
|-- tasks.py
Then calling app.autodiscover_tasks(['foo', 'bar'])
will result in the modules foo.tasks and bar.tasks being imported.
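The discovery rule itself is simple: each listed package contributes one module named `<package>.tasks` (`tasks` being the default `related_name` argument). A rough pure-Python illustration of that naming rule, not Celery's actual implementation:

```python
def task_modules(packages, related_name='tasks'):
    # Mirror the naming rule autodiscover_tasks applies: each package
    # contributes the module '<package>.<related_name>'.
    return ['{0}.{1}'.format(pkg, related_name) for pkg in packages]

print(task_modules(['foo', 'bar']))  # -> ['foo.tasks', 'bar.tasks']
```

This is also why the asker's `task1.py`/`task2.py` layout is not picked up by a plain autodiscover call: the modules would need to be named `tasks.py`, or listed explicitly.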