LRU cache design

This article walks through an approach to LRU cache design; it should be a useful reference for anyone tackling the same problem.

Problem description

A Least Recently Used (LRU) cache discards the least recently used items first. How would you design and implement such a cache class? The design requirements are as follows:

1) Find an item as fast as possible.

2) When a cache miss occurs and the cache is full, replace the least recently used item as fast as possible.

How should this problem be analyzed and implemented in terms of design patterns and algorithm design?

Solution

A linked list plus a hashtable of pointers to the linked-list nodes is the usual way to implement LRU caches. This gives O(1) operations (assuming a decent hash). An advantage of this being O(1) is that you can make a multithreaded version by simply locking the whole structure; you don't have to worry about granular locking and so on.

Briefly, the way it works:

On an access of a value, you move the corresponding node in the linked list to the head.

When you need to remove a value from the cache, you remove from the tail end.

When you add a value to cache, you just place it at the head of the linked list.

Thanks to doublep, here is a site with a C++ implementation: Miscellaneous Container Templates.
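
For concreteness, here is a minimal C++17 sketch of the approach described in the answer, using std::list for the recency order and std::unordered_map to index the list nodes. The class name LruCache and its get/put interface are illustrative choices of my own (not the API of the library linked above), and thread safety is omitted; a production version could simply wrap each operation in one lock, as the answer suggests.

#include <cstddef>
#include <iostream>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

// Illustrative sketch only; the library linked above has its own API.
template <typename Key, typename Value>
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    // Access: on a hit, splice the node to the head of the list (most recently used).
    std::optional<Value> get(const Key& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return std::nullopt;        // cache miss
        items_.splice(items_.begin(), items_, it->second);  // O(1) move to head
        return it->second->second;
    }

    // Insert or update: new entries go to the head; evict from the tail when full.
    void put(const Key& key, const Value& value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            it->second->second = value;
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() == capacity_) {
            index_.erase(items_.back().first);  // drop the least recently used entry
            items_.pop_back();
        }
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
    }

private:
    using Node = std::pair<Key, Value>;
    std::size_t capacity_;
    std::list<Node> items_;                                            // head = most recently used
    std::unordered_map<Key, typename std::list<Node>::iterator> index_;
};

int main() {
    LruCache<int, std::string> cache(2);
    cache.put(1, "one");
    cache.put(2, "two");
    cache.get(1);               // key 1 becomes most recently used
    cache.put(3, "three");      // cache is full, so key 2 is evicted
    std::cout << std::boolalpha
              << cache.get(2).has_value() << ' '   // false: evicted
              << cache.get(1).has_value() << '\n'; // true: still cached
}

The key detail is std::list::splice, which moves an existing node to the head in O(1) without invalidating the iterators stored in the hash map, so the map entries never need to be rebuilt on access.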

That concludes this article on LRU cache design. We hope the answer above is helpful, and thank you for your continued support of IT屋!
