What is required for reliable task processing in Celery when using Redis?


Problem description

We are looking to run Celery/Redis in a Kubernetes cluster, and currently do not have Redis persistence enabled (everything is in-memory). I am concerned about: Redis restarts (losing in-memory data), worker restarts/outages (due to crashes and/or pod scheduling), and transient network issues.

When using Celery to do task processing with Redis, what is required to ensure that tasks are reliable?

Answer

In order to make your Celery cluster more robust when using Redis as a broker (and result backend), I recommend using one (or more) replicas. Unfortunately, redis-py does not yet support clustered Redis, but that is just a matter of time. In replicated mode, when the master server goes down, a replica takes its place, and this is (almost) entirely transparent. Celery also supports Redis Sentinel.

Celery has become much more robust over the years in terms of ensuring that tasks get redelivered in certain critical cases. If a task fails because the worker is lost (there is a configuration parameter for this), because an exception was thrown, etc., it will be redelivered and executed again.

