Per-request cache in Django?


Problem Description

I would like to implement a decorator that provides per-request caching to any method, not just views. Here is an example use case.

I have a custom tag that determines if a record in a long list of records is a "favorite". In order to check if an item is a favorite, you have to query the database. Ideally, you would perform one query to get all the favorites, and then just check that cached list against each record.
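For concreteness, the get_favorites helper used below could be a single query along these lines (the Favorite model and its fields are hypothetical, just for illustration):

from myapp.models import Favorite  # hypothetical model, for illustration only

def get_favorites(user):
    # One query: fetch every record this user has favorited.
    return set(
        fav.record
        for fav in Favorite.objects.filter(user=user).select_related('record')
    )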

One solution is to get all the favorites in the view, and then pass that set into the template, and then into each tag call.

Alternatively, the tag could perform the query itself, but only the first time it's called. Then the results could be cached for subsequent calls. The upside is that you can use this tag from any template, on any view, without altering the view.

With the existing caching mechanism, you could just cache the result for 50 ms and assume that it would correlate to the current request. I want to make that correlation reliable.
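A rough sketch of that timeout-based approach, using Django's default cache (the key scheme and the wrapper name are made up for illustration):

from django.core.cache import cache

def get_favorites_cached(user):
    # Rely on a very short timeout to approximate "valid for this request only".
    key = 'favorites:%s' % user.pk
    favorites = cache.get(key)
    if favorites is None:
        favorites = get_favorites(user)   # one query for all favorites
        cache.set(key, favorites, 0.05)   # 50 ms timeout
    return favorites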

Here is an example of the tag I currently have.

from django import template

register = template.Library()

@register.filter()
def is_favorite(record, request):
    # Reuse the favorites fetched earlier in this request, if any.
    if "get_favorites" in request.POST:
        favorites = request.POST["get_favorites"]
    else:
        # First call in this request: hit the database once...
        favorites = get_favorites(request.user)

        # ...and stash the result on the request. request.POST is an
        # immutable QueryDict, so it has to be copied before mutation.
        post = request.POST.copy()
        post["get_favorites"] = favorites
        request.POST = post

    return record in favorites

Is there a way to get the current request object from Django without passing it around? From a tag, I could just pass in request, which will always exist. But I would like to use this decorator from other functions.

Is there an existing implementation of a per-request cache?

Solution

Using a custom middleware, you can get a Django cache instance that is guaranteed to be cleared for each request.

This is what I used in a project:

from threading import currentThread
from django.core.cache.backends.locmem import LocMemCache

_request_cache = {}
_installed_middleware = False

def get_request_cache():
    assert _installed_middleware, 'RequestCacheMiddleware not loaded'
    return _request_cache[currentThread()]

# LocMemCache is a threadsafe local memory cache
class RequestCache(LocMemCache):
    def __init__(self):
        name = 'locmemcache@%i' % hash(currentThread())
        params = dict()
        super(RequestCache, self).__init__(name, params)

class RequestCacheMiddleware(object):
    def __init__(self):
        global _installed_middleware
        _installed_middleware = True

    def process_request(self, request):
        # Reuse this thread's cache if one exists, otherwise create it,
        # then clear it so every request starts with an empty cache.
        cache = _request_cache.get(currentThread()) or RequestCache()
        _request_cache[currentThread()] = cache

        cache.clear()

To use the middleware, register it in settings.py, e.g.:

MIDDLEWARE_CLASSES = (
    ...
    'myapp.request_cache.RequestCacheMiddleware',
)

You may then use the cache as follows:

from myapp.request_cache import get_request_cache

cache = get_request_cache()
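For example, the is_favorite filter from the question could then use the per-request cache instead of stashing the favorites on request.POST. A minimal sketch (the cache key is arbitrary, and get_favorites is the helper from the question):

from django import template

from myapp.request_cache import get_request_cache

register = template.Library()

@register.filter()
def is_favorite(record, request):
    # Only the first call in a request queries the database; later calls
    # within the same request are served from the per-request cache.
    cache = get_request_cache()
    favorites = cache.get('favorites')
    if favorites is None:
        favorites = get_favorites(request.user)  # the helper from the question
        cache.set('favorites', favorites)
    return record in favorites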

Refer to the Django low-level cache API docs for more information:

Django Low-Level Cache API

It should be easy to modify a memoize decorator to use the request cache. Have a look at the Python Decorator Library for a good example of a memoize decorator:

Python Decorator Library
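A per-request memoize decorator along those lines might look roughly like this; it is a sketch, not the library's code, and it assumes the decorated function takes only hashable positional arguments:

import functools

from myapp.request_cache import get_request_cache

_MISSING = object()   # sentinel to distinguish "not cached" from a cached None

def memoize_per_request(func):
    """Cache func's result in the per-request cache, keyed by the function
    name and a hash of its positional arguments."""
    @functools.wraps(func)
    def wrapper(*args):
        cache = get_request_cache()
        key = 'memoize:%s:%d' % (func.__name__, hash(args))
        result = cache.get(key, _MISSING)
        if result is _MISSING:
            result = func(*args)
            cache.set(key, result)
        return result
    return wrapper

With RequestCacheMiddleware installed, decorating a helper such as get_favorites with @memoize_per_request makes it hit the database at most once per request for a given set of arguments.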
