How to get functools.lru_cache to return new instances?

Problem description

I use Python's lru_cache on a function which returns a mutable object, like so:

import functools

@functools.lru_cache()
def f():
    x = [0, 1, 2]  # Stand-in for some long computation
    return x

If I call this function, mutate the result and call it again, I do not obtain a "fresh", unmutated object:

a = f()
a.append(3)
b = f()
print(a)  # [0, 1, 2, 3]
print(b)  # [0, 1, 2, 3]

I get why this happens, but it's not what I want. A fix would be to leave the caller in charge of using list.copy:

a = f().copy()
a.append(3)
b = f().copy()
print(a)  # [0, 1, 2, 3]
print(b)  # [0, 1, 2]

However I would like to fix this inside f. A pretty solution would be something like

@functools.lru_cache(copy=True)
def f():
    ...

though no copy argument is actually taken by functools.lru_cache.

Any suggestion as to how to best implement this behavior?

Based on the answer from holdenweb, this is my final implementation. It behaves exactly like the builtin functools.lru_cache by default, and extends it with the copying behavior when copy=True is supplied.

import functools
from copy import deepcopy

def lru_cache(maxsize=128, typed=False, copy=False):
    if not copy:
        # No copying requested: behave exactly like the standard functools.lru_cache
        return functools.lru_cache(maxsize, typed)
    def decorator(f):
        cached_func = functools.lru_cache(maxsize, typed)(f)
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Hand out a deep copy so callers cannot mutate the cached object
            return deepcopy(cached_func(*args, **kwargs))
        return wrapper
    return decorator

# Tests below

@lru_cache()
def f():
    x = [0, 1, 2]  # Stand-in for some long computation
    return x

a = f()
a.append(3)
b = f()
print(a)  # [0, 1, 2, 3]
print(b)  # [0, 1, 2, 3]

@lru_cache(copy=True)
def f():
    x = [0, 1, 2]  # Stand-in for some long computation
    return x

a = f()
a.append(3)
b = f()
print(a)  # [0, 1, 2, 3]
print(b)  # [0, 1, 2]
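
One caveat with the copy=True branch above: the returned wrapper does not expose the cache_info() and cache_clear() helpers that functools.lru_cache normally attaches, since functools.wraps only copies metadata from the original f. If those helpers are needed, a minimal sketch of the same idea with the helpers forwarded explicitly (an extension, not part of the implementation above) would be:

import functools
from copy import deepcopy

def lru_cache(maxsize=128, typed=False, copy=False):
    if not copy:
        return functools.lru_cache(maxsize, typed)
    def decorator(f):
        cached_func = functools.lru_cache(maxsize, typed)(f)
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            return deepcopy(cached_func(*args, **kwargs))
        # Forward the cache-management helpers added by functools.lru_cache,
        # so the copying variant can still be inspected and cleared.
        wrapper.cache_info = cached_func.cache_info
        wrapper.cache_clear = cached_func.cache_clear
        return wrapper
    return decorator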

Recommended answer

Since the lru_cache decorator has unsuitable behaviour for you, the best you can do is to build your own decorator that returns a copy of what it gets from lru_cache. This will mean that the first call with a particular set of arguments will create two copies of the object, since now the cache will only be holding prototype objects.

This question is made more difficult because lru_cache can take arguments (maxsize and typed), so a call to lru_cache returns a decorator. Remembering that a decorator takes a function as its argument and (usually) returns a function, you will have to replace lru_cache with a function that takes two arguments and returns a function that takes a function as an argument and returns a (wrapped) function, which is not an easy structure to wrap your head around.
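
As a rough skeleton (illustrative names only, not the final implementation), the three layers look like this:

def cache_with_options(maxsize=128, typed=False):   # level 1: takes the configuration arguments
    def decorator(func):                             # level 2: takes the function being decorated
        def wrapper(*args, **kwargs):                # level 3: replaces the decorated function
            return func(*args, **kwargs)             # real work (caching, copying) would go here
        return wrapper
    return decorator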

You would then write your functions using the copying_lru_cache decorator instead of the standard one, which is now applied "manually" inside the updated decorator.

Depending on how heavy the mutations are, you might get away without using deepcopy (a shallow-copy variant is sketched at the end of this answer), but you don't give enough information to determine that.

Your code would then read:

from functools import lru_cache
from copy import deepcopy

def copying_lru_cache(maxsize=10, typed=False):
    def decorator(f):
        # Cache the underlying function with the standard lru_cache
        cached_func = lru_cache(maxsize=maxsize, typed=typed)(f)
        def wrapper(*args, **kwargs):
            # Return a deep copy so the cached object is never handed out directly
            return deepcopy(cached_func(*args, **kwargs))
        return wrapper
    return decorator

@copying_lru_cache()
def f(arg):
    print(f"Called with {arg}")
    x = [0, 1, arg]  # Stand-in for some long computation
    return x

print(f(1), f(2), f(3), f(1))

This prints

Called with 1
Called with 2
Called with 3
[0, 1, 1] [0, 1, 2] [0, 1, 3] [0, 1, 1]

so the caching behaviour you require appears to be present. Note also that the documentation for lru_cache specifically warns that

In general, the LRU cache should only be used when you want to reuse previously computed values. Accordingly, it doesn’t make sense to cache functions with side-effects, functions that need to create distinct mutable objects on each call, or impure functions such as time() or random().
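
Following the earlier remark that deepcopy may be unnecessary: if the returned objects are only ever mutated at the top level (as with list.append in the question), a shallow copy is cheaper. A sketch of that variant, assuming top-level-only mutation, would be:

from functools import lru_cache
from copy import copy

def shallow_copying_lru_cache(maxsize=10, typed=False):
    def decorator(f):
        cached_func = lru_cache(maxsize=maxsize, typed=typed)(f)
        def wrapper(*args, **kwargs):
            # copy() duplicates only the outer object; any nested objects are still shared
            return copy(cached_func(*args, **kwargs))
        return wrapper
    return decorator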
