Celery with Redis broker in Django: tasks successfully execute, but too many persistent Redis keys and connections remain

Problem description

Our Python server (Django 1.11.17) uses Celery 4.2.1 with Redis as the broker (the pip redis package we're using is 3.0.1). The Django app is deployed to Heroku, and the Celery broker was set up using Heroku's Redis Cloud add-on.
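
The question doesn't include the project's Celery wiring; for context, here is a minimal sketch of what such a setup typically looks like (the "proj" module name and the REDIS_URL environment variable are assumptions, not the poster's actual code):

    # proj/celery.py -- minimal sketch of a Django + Celery 4.x app with a Redis broker.
    # "proj" and the REDIS_URL variable are assumptions; Heroku add-ons expose the
    # connection string under add-on-specific environment variables.
    import os

    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

    app = Celery(
        'proj',
        broker=os.environ.get('REDIS_URL', 'redis://localhost:6379/0'),
    )

    # Pick up any CELERY_*-prefixed settings from Django's settings.py.
    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks()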

The Celery tasks we have should definitely have completed within a minute (median completion time is ~100 ms), but we're seeing that Redis keys and connections are persisting for much, much longer than that (up to 24 hours). Otherwise, tasks are being executed correctly.

What could be causing these keys and connections to persist in our Redis broker? How can we clear them when Celery tasks conclude?

Here's a Redis Labs screenshot of this happening (all tasks should have completed, so we'd expect zero keys and zero connections):
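
One way to confirm what the lingering keys actually are is to scan the broker for Celery's result keys. A hedged diagnostic sketch using the redis 3.0.1 client mentioned above (the REDIS_URL variable is again an assumption):

    # list_result_keys.py -- diagnostic sketch, not part of the original question.
    import os

    import redis

    r = redis.Redis.from_url(os.environ.get('REDIS_URL', 'redis://localhost:6379/0'))

    # When a result backend is configured, Celery stores each task's outcome under a
    # key named "celery-task-meta-<task id>". scan_iter avoids blocking Redis the way
    # KEYS would on a large keyspace.
    result_keys = list(r.scan_iter(match='celery-task-meta-*'))
    print('%d result keys still in Redis' % len(result_keys))

    # Result keys are written with an expiry (result_expires defaults to 24 hours),
    # which lines up with the "up to 24 hours" persistence described above.
    for key in result_keys[:10]:
        print(key, 'TTL (seconds):', r.ttl(key))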

Solution

Resolved my own question: if the CELERY_IGNORE_RESULT config variable is set to True (which I'm able to do because I don't use any return values from my tasks), then the keys and connections are back under control.

Source: Celery project documentation
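
For reference, a sketch of what that fix looks like in practice. The setting name and the per-task ignore_result option are real Celery options, but the example task is illustrative only:

    # settings.py -- global fix described above: never write task results to the
    # Redis result backend. (Celery 4.x also accepts the new-style name
    # task_ignore_result for the same option.)
    CELERY_IGNORE_RESULT = True

If only some tasks can safely discard their return values, the same behaviour can be requested per task instead:

    # tasks.py -- narrower alternative: ignore results for individual tasks.
    from celery import shared_task

    @shared_task(ignore_result=True)
    def notify_user(user_id):
        # Task body omitted; the name and argument are illustrative only.
        pass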
