How to implement thread-safe LRU cache eviction?


Question

I've implemented an LRU cache (code) that I would like to use for a multithreaded matching problem with N elements and full N^2 (all pairs) matching. Ideally, I would just get a reference to each element directly from the cache to save memory.

The time it takes to match two elements (let's call them A and B) can vary greatly, and I am worried that if one pair of elements takes a long time to match, then another thread (that is working very fast and processing many pairs) will cause either A or B to be evicted from the cache, making the references invalid.

A simple solution is to just not use references, but I'm wondering if there is a better way to ensure that elements won't be evicted while they are "currently used" or have a reference to them?

Answer

To avoid evicting objects that are in use, it is possible to use the reference-counting functionality of std::shared_ptr. Consider the following implementation:

#include <iostream>
#include <string>
#include <memory>
#include <map>
#include <algorithm>

// A cache that refuses to evict entries that are still referenced outside it.
template <typename K, typename V> class cache
{
public:
    cache() {}

    static const constexpr int max_cache_size = 2;

    std::shared_ptr<V> getValue(const K& k)
    {
        auto iter = cached_values.find(k);
        if (iter == cached_values.end()) {

            if (cached_values.size() == max_cache_size) {
                // Only entries held by nobody but the cache itself
                // (use_count() == 1) may be evicted.
                // (std::shared_ptr::unique() expresses the same check, but it
                // was deprecated in C++17 and removed in C++20.)
                auto evictIter =
                    std::find_if(cached_values.begin(), cached_values.end(),
                        [](const auto& kv) { return kv.second.second.use_count() == 1; });

                if (evictIter == cached_values.end()) {
                    std::cout << "Nothing to evict\n";
                    return nullptr;
                }

                cached_values.erase(evictIter);
            }

            // Placeholder for computing the real value: a running counter.
            static V next;

            iter = cached_values.insert(std::make_pair(k, std::make_pair(++next, nullptr))).first;
            // Alias a shared_ptr to the value stored inside the map node, with
            // a no-op deleter: the map owns the value, and map nodes are
            // stable, so the pointer stays valid until the entry is erased.
            iter->second.second = std::shared_ptr<V>(&iter->second.first, [](const auto&) {});
        }

        return iter->second.second;
    }

    std::map<K, std::pair<V, std::shared_ptr<V>>> cached_values;
};

int main()
{
    cache<int, int> c;

    std::cout << *c.getValue(10) << "\n";
    std::cout << *c.getValue(20) << "\n";
    std::cout << *c.getValue(30) << "\n";

    auto useOne = c.getValue(10);
    auto useTwo = c.getValue(20);
    std::cout << *c.getValue(20) << "\n"; // We can use stuff that is still in cache
    std::cout << c.getValue(30);          // Cache is full, also note no dereferencing
}

Basically, as long as anyone outside the cache holds the returned value, the entry's shared_ptr use count stays above one (std::shared_ptr::unique would return false), making the cache entry non-evictable.


