A thread-safe memoize decorator


Problem Description

I'm trying to make a memoize decorator that works with multiple threads.

I understand that I need to use the cache as a shared object between the threads and acquire/lock that shared object. I am, of course, launching the threads:

for i in range(5):
    thread = threading.Thread(target=self.worker, args=(self.call_queue,))
    thread.daemon = True
    thread.start()

where the worker is:

def worker(self, call):
    func, args, kwargs = call.get()
    self.returns.put(func(*args, **kwargs))
    call.task_done()
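
For context, here is a minimal, self-contained sketch of the setup described above. The class name, the queue attributes, and the worker loop are assumptions filled in for illustration, not the asker's actual code:

import queue
import threading

class Runner:
    def __init__(self, num_workers=5):
        self.call_queue = queue.Queue()   # holds (func, args, kwargs) tuples to execute
        self.returns = queue.Queue()      # collects results from the workers
        for i in range(num_workers):
            thread = threading.Thread(target=self.worker, args=(self.call_queue,))
            thread.daemon = True
            thread.start()

    def worker(self, call):
        # The loop is an assumption: it keeps each daemon thread serving calls.
        while True:
            func, args, kwargs = call.get()
            self.returns.put(func(*args, **kwargs))
            call.task_done()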

The problem starts, of course, when I'm sending a function decorated with a memo function (like this) to many threads at the same time.

How can I implement the memo's cache as a shared object among the threads?

Recommended Answer

The most straightforward way is to employ a single lock for the entire cache, and to require that any writes to the cache grab that lock first.

In the example code you posted, at line 31, you would acquire the lock and check whether the result is still missing, in which case you would go ahead, compute the result, and cache it. Something like this:

lock = threading.Lock()
...
except KeyError:
    with lock:
        # Re-check inside the lock: another thread may have cached
        # the result while we were waiting to acquire the lock.
        if key in self.cache:
            v = self.cache[key]
        else:
            v = self.cache[key] = f(*args, **kwargs), time.time()

The example you posted stores a cache per function in a dictionary, so you'd need to store a lock per function as well.
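
Putting those pieces together, here is a rough sketch of what the whole decorator could look like with one lock per decorated function and the double-check inside the lock. The cache stores (value, timestamp) tuples as in the snippet above, but the name memoize, the key construction, and the overall structure are illustrative assumptions, not the linked recipe's actual code:

import functools
import threading
import time

def memoize(f):
    cache = {}                   # key -> (value, timestamp)
    lock = threading.Lock()      # one lock per decorated function

    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        try:
            v, _ = cache[key]    # fast path: cache hit, no lock taken
        except KeyError:
            with lock:
                # Re-check inside the lock: another thread may have
                # filled this entry while we were waiting for the lock.
                if key in cache:
                    v, _ = cache[key]
                else:
                    v = f(*args, **kwargs)
                    cache[key] = (v, time.time())
        return v
    return wrapper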

If you were using this code in a highly contended environment, though, it would probably be unacceptably inefficient, since threads would have to wait on each other even when they weren't computing the same thing. You could probably improve on this by storing a lock per key in your cache. You'll need to globally lock access to the lock storage as well, though, or else there's a race condition in creating the per-key locks.
