Best way to implement LRU cache


Problem Description

I was looking at this problem of LRU cache implementation where, once the cache is full, the least recently used item is popped out and replaced by the new item.

I have two implementations in mind:

1). Create two maps which look something like this

std::map<timestamp, k> time_to_key
std::map<key, std::pair<timestamp, V>> LRUCache

To insert a new element, we can put the current timestamp and value in the LRUCache. When the cache is full, we can evict the least recently used element by finding the smallest timestamp present in time_to_key and removing the corresponding key from LRUCache. Inserting a new item is O(1); updating the timestamp is O(n) (because we need to search time_to_key for the entry corresponding to that key).
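As an illustration of this two-map layout, a rough Java sketch might look like the following (everything here is invented for the example: the class name, the logical counter standing in for a real timestamp, and the use of TreeMap/HashMap in place of std::map):

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the two-map idea: a TreeMap ordered by timestamp plays the role of
// time_to_key, and keyToTime/values together play the role of LRUCache
// (key -> (timestamp, value)).
class TwoMapLruCache<K, V> {
    private final int capacity;
    private long clock = 0;                               // logical timestamp, monotonically increasing
    private final TreeMap<Long, K> timeToKey = new TreeMap<>();
    private final Map<K, Long> keyToTime = new HashMap<>();
    private final Map<K, V> values = new HashMap<>();

    TwoMapLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        V v = values.get(key);
        if (v != null) touch(key);                        // refresh the timestamp on access
        return v;
    }

    void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            K eldest = timeToKey.pollFirstEntry().getValue();   // smallest timestamp = LRU
            values.remove(eldest);
            keyToTime.remove(eldest);
        }
        values.put(key, value);
        touch(key);
    }

    private void touch(K key) {
        Long old = keyToTime.get(key);
        if (old != null) timeToKey.remove(old);           // drop the stale timestamp entry
        long now = ++clock;
        keyToTime.put(key, now);
        timeToKey.put(now, key);
    }
}

Note that because the old timestamp is stored right next to the key in this sketch, the update is an O(log n) map removal rather than a linear scan.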

2). Have a linked list in which the least recently used entry sits at the head and new items are added at the tail. When an item arrives which is already present in the cache, the node corresponding to that item's key is moved to the tail of the list. Inserting a new item is O(1), updating the timestamp is again O(n) (because we need to find the item's node before moving it to the tail of the list), and deleting an element is O(1).
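A common refinement of this list-based approach, going beyond what the post itself describes, is to pair the doubly linked list with a HashMap from key to list node, so the node can be located in O(1) instead of by scanning. A rough, hypothetical sketch (all names invented for the example):

import java.util.HashMap;
import java.util.Map;

// Doubly linked list (LRU at head, MRU at tail) plus a HashMap from key to node,
// so both "move to tail" and eviction are O(1).
class ListLruCache<K, V> {
    private static final class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> index = new HashMap<>();
    private Node<K, V> head, tail;                        // head = least recently used

    ListLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Node<K, V> n = index.get(key);
        if (n == null) return null;
        moveToTail(n);                                    // accessed -> most recently used
        return n.value;
    }

    void put(K key, V value) {
        Node<K, V> n = index.get(key);
        if (n != null) { n.value = value; moveToTail(n); return; }
        if (index.size() >= capacity && head != null) {   // evict the least recently used
            index.remove(head.key);
            unlink(head);
        }
        n = new Node<>(key, value);
        index.put(key, n);
        append(n);
    }

    private void moveToTail(Node<K, V> n) { unlink(n); append(n); }

    private void unlink(Node<K, V> n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
        n.prev = n.next = null;
    }

    private void append(Node<K, V> n) {
        if (tail == null) { head = tail = n; }
        else { tail.next = n; n.prev = tail; tail = n; }
    }
}

This hash-map-plus-linked-list combination is essentially what Java's LinkedHashMap maintains internally, which is why the accepted answer below can rely on it.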

Now I have the following questions:

  1. Which one of these implementations is better for an LRU cache?

  2. Is there any other way to implement the LRU cache?

  3. In Java, should I use HashMap to implement the LRU cache?

  4. I have seen questions like "implement a generic LRU cache", and also questions like "implement an LRU cache". Is a generic LRU cache different from an LRU cache?

Thanks in advance!!!

EDIT:

Another way (the easiest way) to implement an LRU cache in Java is to use LinkedHashMap and override the boolean removeEldestEntry(Map.Entry eldest) method.

Solution

If you want an LRU cache, the simplest in Java is LinkedHashMap. The default behaviour is FIFO; however, you can change it to access order, which makes it an LRU cache.

public static <K,V> Map<K,V> lruCache(final int maxSize) {
    return new LinkedHashMap<K, V>(maxSize*4/3, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    };
}

Note: I am using the three-argument constructor, which changes the map's ordering from insertion order to access order, so iteration runs from least recently used to most recently used.
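For example (a hypothetical usage, assuming the lruCache method above sits in the same class), the access-order eviction behaves like this:

public static void main(String[] args) {
    Map<String, Integer> cache = lruCache(2);
    cache.put("a", 1);
    cache.put("b", 2);
    cache.get("a");                      // touch "a": it is now the most recently used
    cache.put("c", 3);                   // over capacity: "b" (least recently used) is evicted
    System.out.println(cache.keySet());  // prints [a, c]
}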

From the Javadoc

public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder)
Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.
Parameters:
initialCapacity - the initial capacity
loadFactor - the load factor
accessOrder - the ordering mode - true for access-order, false for insertion-order

When accessOrder is true the LinkedHashMap re-arranges the order of the map whenever you get() an entry which is not the last one.

This way the oldest entry is the least recently used one.

