RabbitMQ queues filling up with Celery tasks


Problem description

I am using Celery to call multiple hardware units by their IP address. Each unit returns a list of values. The application code is below:

# create a list of tasks
modbus_calls = []
for site in sites:
    call = call_plc.apply_async((site.name, site.address), expires=120)  # expires after 2 minutes?
    modbus_calls.append(call)

# below, check that all tasks are complete (values returned), then move forward out of the while loop
ready_list = [False]
while not all(ready_list):
    ready_list = []
    for task in modbus_calls:
        ready_list.append(task.ready())

# once here, all tasks have returned their values; use task.get() to obtain each list of values
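
A minimal sketch of that collection step, reusing the modbus_calls list from above (the results name is just an illustrative placeholder, not part of the original code):

results = []
for task in modbus_calls:
    results.append(task.get())  # each get() returns the list of values from one unit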

Within the tasks.py file, the call_plc task is defined as:

@app.task
def call_plc(sitename, ip_address):
    vals = pc.PLC_Comm().connect_to(sitename, ip_address)
    return vals

What is happening: I can only run this application a certain number of times before RabbitMQ starts crashing (running out of memory). I look in /var/lib/rabbitmq/mnesia/rabbit@mymachine/queues and see a bunch of queues with UUID names. These UUID names do not match the task IDs (learned from print task.id within my application). Every time I run the application, n queues are added to this folder, where n = number of sites to call.

The first time I run the application after resetting RabbitMQ, it adds n+1 queues.

How can I make it so these tasks / queues do not persist? Once I get the results, I no longer need the task in any way.

task.forget() fails with NotImplementedError('backend does not implement forget.')

The task expiration settings seem to have no effect. My celeryconfig file is below:

BROKER_URL = 'amqp://webdev_rabbit:password@localhost:5672/celeryhost'
CELERY_RESULT_BACKEND = 'amqp://webdev_rabbit:password@localhost:5672/celeryhost'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT=['json']
CELERY_TIMEZONE = 'Europe/Oslo'
CELERY_ENABLE_UTC = True
CELERY_AMQP_TASK_RESULT_EXPIRES = 120

Recommended answer

It sounds like you don't want to use RabbitMQ as a result backend, only as a message broker. See this previous question: Queues with random GUID being generated in RabbitMQ server
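
With the AMQP result backend, every task result is delivered to its own auto-generated queue, which is why UUID-named queues pile up on each run. A minimal sketch of the suggested celeryconfig change, assuming a Redis instance is available to hold the results (the Redis URL is a placeholder; any non-AMQP result backend would do):

BROKER_URL = 'amqp://webdev_rabbit:password@localhost:5672/celeryhost'  # RabbitMQ stays as the message broker
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'  # assumed Redis backend instead of AMQP
CELERY_TASK_RESULT_EXPIRES = 120  # stored results are cleaned up after two minutes
# or, if the return values were not needed at all, skip result storage entirely:
# CELERY_IGNORE_RESULT = True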
