Resque: time-critical jobs that are executed sequentially per user


Problem description

My application creates Resque jobs that must be processed sequentially per user, and they should be processed as fast as possible (1 second maximum delay).

An example: job1 and job2 are created for user1 and job3 for user2. Resque can process job1 and job3 in parallel, but job1 and job2 should be processed sequentially.

I have a few ideas for a solution:

  • I could use different queues (e.g. queue_1 ... queue_10) and start a worker for each queue (e.g. rake resque:work QUEUE=queue_1). Users are assigned to a queue/worker at runtime (e.g. on login, every day, etc.) — see the sketch after this list.
  • I could use dynamic "user queues" (e.g. queue_#{user.id}) and try to extend Resque so that only one worker can process a given queue at a time (as asked in Resque: one worker per queue).
  • I could put the jobs in a non-Resque queue and use a "per-user meta job" with resque-lock (https://github.com/defunkt/resque-lock) that handles those jobs.
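
To make the first two options more concrete, here is a minimal sketch of how jobs could be routed to a fixed pool of queues, assuming one worker is started per queue (rake resque:work QUEUE=queue_1 ... QUEUE=queue_10). QUEUE_COUNT, SequentialJob and enqueue_for_user are illustrative names, not part of the question; Resque.enqueue_to pushes a job onto an explicitly named queue.

require 'resque'

QUEUE_COUNT = 10  # size of the fixed queue pool (assumption)

class SequentialJob
  def self.perform(user_id, payload)
    # do the user's work here ...
  end
end

# A given user always maps to the same queue, so running exactly one worker
# per queue keeps that user's jobs strictly ordered while different users
# are still processed in parallel.
def enqueue_for_user(user_id, payload)
  queue = "queue_#{(user_id % QUEUE_COUNT) + 1}"
  Resque.enqueue_to(queue, SequentialJob, user_id, payload)
end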

Do you have any experience with one of those scenarios in practice? Or do you have other ideas that might be worth thinking about? I would appreciate any input, thank you!

Recommended answer

Thanks to the answer from @Isotope, I finally came to a solution that seems to work (using resque-retry and locks in Redis):

class MyJob
  extend Resque::Plugins::Retry

  # re-enqueue the job immediately when a lock timeout occurs
  @retry_delay = 0
  # no real limit needed, since the lock will be cleared at some point
  @retry_limit = 10000
  # only retry on lock timeouts
  @retry_exceptions = [Redis::Lock::LockTimeout]

  def self.perform(user_id, ...)
    # Lock the job for the given user.
    # If another job for this user is already in progress,
    # Redis::Lock::LockTimeout is raised and the job is requeued.
    Redis::Lock.new("my_job.user##{user_id}",
      :expiration => 1,
      # We don't want to wait for the lock; just requeue the job as fast as possible
      :timeout => 0.1
    ).lock do
      # do your stuff here ...
    end
  end
end

I am using Redis::Lock from https://github.com/nateware/redis-objects here (it encapsulates the pattern from http://redis.io/commands/setex).
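
For reference, the pattern that Redis::Lock wraps is roughly a set-if-absent with an expiry. Below is a rough sketch of that pattern using plain redis-rb; with_user_lock, the key format and the one-second expiry are illustrative (mirroring the job above), not part of the redis-objects API.

require 'redis'

# Illustrative helper, not part of redis-objects: take a short-lived,
# per-user lock and run the block only if no other worker holds it.
def with_user_lock(redis, user_id)
  key = "my_job.user##{user_id}"
  # NX: only set the key if it does not exist yet; EX: expire it after
  # 1 second so a crashed worker cannot block the user forever.
  if redis.set(key, Time.now.to_f, nx: true, ex: 1)
    begin
      yield
    ensure
      redis.del(key)
    end
  else
    # Lock is held by another worker; the caller can re-enqueue or retry,
    # which is what resque-retry does in the job above.
    raise "lock busy for user #{user_id}"
  end
end

# Usage: with_user_lock(Redis.new, 42) { do_the_work }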

