CELERYD_CONCURRENCY, --concurrency and autoscale

Question

I have a few questions regarding task routing, concurrency and performance. Here is my use case:

I've got one dedicated server to run celery tasks, so I can use all the CPUs to run celery workers on this server.

I have a lot of different python tasks, which I route using CELERY_ROUTES, and because the tasks perform really different types of python code, I created 5 different workers. These workers are created when I deploy my project using Ansible; here is an example:

[program:default_queue-celery]
command={{ venv_dir }}/bin/celery worker --app=django_coreapp --loglevel=INFO --concurrency=1 --autoscale=15,10 --queues=default_queue
environment =
    SERVER_TYPE="{{ SERVER_TYPE }}",
    DB_SCHEMA="{{ DB_SCHEMA }}",
    DB_USER="{{ DB_USER }}",
    DB_PASS="{{ DB_PASS }}",
    DB_HOST="{{ DB_HOST }}"
directory={{ git_dir }}
user={{ user }}
group={{ group }}
stdout_logfile={{ log_dir }}/default_queue.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=5
redirect_stderr=true
autostart=true
autorestart=true
startsecs=10
killasgroup=true 

I also have a CELERY_QUEUES setting in settings.py to make the bridge between CELERY_ROUTES and my celery programs (queues):

CELERY_DEFAULT_QUEUE = 'default_queue'

And if it happens that I don't route a task, it will go to my 'default_queue'.
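
For context, here is a minimal sketch of what the settings.py side of that bridge might look like. The queue and task names ('important_queue', 'myapp.tasks.heavy_task') are placeholders for illustration, not the actual project configuration:

# settings.py -- illustrative sketch only; queue and task names are placeholders
from kombu import Exchange, Queue

CELERY_DEFAULT_QUEUE = 'default_queue'

# One Queue entry per supervisor program (matching its --queues=<name> option)
CELERY_QUEUES = (
    Queue('default_queue', Exchange('default_queue'), routing_key='default_queue'),
    Queue('important_queue', Exchange('important_queue'), routing_key='important_queue'),
)

# Any task not matched here falls back to CELERY_DEFAULT_QUEUE
CELERY_ROUTES = {
    'myapp.tasks.heavy_task': {'queue': 'important_queue'},
}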

To give space to all of my queues, I set --concurrency to 1 for default_queue, and more for my most important queue.

But I am wondering: does autoscale affect the same value as concurrency? Meaning, if I set concurrency to 1 and --autoscale to 15,10 (example above), will my worker 'work' on one CPU and process a maximum of 15 tasks on that CPU? Or does this mean something completely different?

Answer

It makes no sense to set both concurrency and autoscale, since both are means to control the number of worker subprocesses for a given worker instance, as explained here.

--concurrency N means you will have exactly N worker subprocesses for your worker instance (meaning the worker instance can handle N concurrent tasks).

--autoscale max,min means you will have at least min and at most max concurrent worker subprocesses for a given worker instance.
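
For reference, a minimal sketch of the settings-file counterpart, assuming an older (3.x-style) Celery configuration as the question's CELERYD_* naming suggests; the value is a placeholder:

# settings.py -- illustrative sketch; the value is a placeholder
# CELERYD_CONCURRENCY is the settings-file counterpart of the --concurrency
# option (a --concurrency flag passed on the command line normally takes
# precedence over it). It fixes the pool at exactly N subprocesses.
CELERYD_CONCURRENCY = 4

# With --autoscale=15,10 (as in the supervisor config above) the worker
# instead resizes its pool between 10 and 15 subprocesses depending on load,
# which is why combining it with a fixed concurrency value is redundant.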

Which CPU each process (the main worker process or any of its child subprocesses) will run on is not predictable; it's an OS thing. But do not assume the subprocesses will all run on the same CPU (chances are they won't; that's part of the point of having concurrent subprocesses, actually).
