How to implement ConcurrentHashMap with features similar in LinkedHashMap?


Problem Description

I have used LinkedHashMap with accessOrder set to true, along with a maximum of 500 entries at any time, as an LRU cache for data. But due to scalability issues I want to move to a thread-safe alternative. ConcurrentHashMap seems good in that regard, but it lacks the accessOrder and removeEldestEntry(Map.Entry e) features found in LinkedHashMap. Can anyone point me to a link or help me ease the implementation?
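For reference, a minimal sketch of the single-threaded setup described in the question; the 500-entry cap and the accessOrder flag come from the question, while the LruCache class name is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the LinkedHashMap-based LRU cache described in the question.
// accessOrder = true moves entries to the end on access, and
// removeEldestEntry() caps the size at 500.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 500; // limit mentioned in the question

    public LruCache() {
        // initialCapacity, loadFactor, accessOrder = true (access-order iteration)
        super(16, 0.75f, true);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the cap is exceeded.
        return size() > MAX_ENTRIES;
    }
}
```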

Recommended Answer

I did something similar recently with ConcurrentHashMap<String, CacheEntry>, where CacheEntry wraps the actual item and adds cache-eviction statistics: expiration time, insertion time (for FIFO/LIFO eviction), last-used time (for LRU/MRU eviction), number of hits (for LFU/MFU eviction), etc. The actual eviction is synchronized: it creates an ArrayList<CacheEntry> and does a Collections.sort() on it using the appropriate Comparator for the eviction strategy. Since this is expensive, each eviction then lops off the bottom 5% of the CacheEntries. I'm sure performance tuning would help, though.
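A rough sketch of that idea, assuming hypothetical CacheEntry and StatsCache classes; the statistics fields, the synchronized sort, and the 5% cut follow the description above, but the exact field names and the LRU comparator are assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical wrapper: CacheEntry carries the cached value plus the
// statistics used by the various eviction strategies.
class CacheEntry {
    final String key;
    final Object value;
    final long insertedAt;   // for FIFO/LIFO eviction
    volatile long lastUsed;  // for LRU/MRU eviction
    volatile long hits;      // for LFU/MFU eviction (increment is not atomic; fine for a sketch)

    CacheEntry(String key, Object value) {
        this.key = key;
        this.value = value;
        this.insertedAt = System.currentTimeMillis();
        this.lastUsed = this.insertedAt;
    }
}

class StatsCache {
    private final ConcurrentHashMap<String, CacheEntry> map = new ConcurrentHashMap<>();

    void put(String key, Object value) {
        map.put(key, new CacheEntry(key, value));
    }

    Object get(String key) {
        CacheEntry entry = map.get(key);
        if (entry == null) {
            return null;
        }
        entry.lastUsed = System.currentTimeMillis();
        entry.hits++;
        return entry.value;
    }

    // Synchronized eviction: sort a snapshot with the comparator for the chosen
    // strategy (LRU here) and drop the bottom 5%, as described in the answer.
    synchronized void evict() {
        List<CacheEntry> snapshot = new ArrayList<>(map.values());
        Collections.sort(snapshot, Comparator.comparingLong(e -> e.lastUsed));
        int toRemove = Math.max(1, snapshot.size() / 20); // bottom 5%
        for (int i = 0; i < toRemove; i++) {
            map.remove(snapshot.get(i).key);
        }
    }
}
```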

In your case, since you're doing FIFO, you could keep a separate ConcurrentLinkedQueue. When you add an object to the ConcurrentHashMap, do a ConcurrentLinkedQueue.add() of that object. When you want to evict an entry, do a ConcurrentLinkedQueue.poll() to remove the oldest object, then remove it from the ConcurrentHashMap as well.
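A minimal sketch of that queue-plus-map approach; the FifoCache class name is illustrative and the 500-entry cap is taken from the question:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the FIFO approach: the queue records insertion order,
// the map holds the actual entries.
class FifoCache<K, V> {
    private static final int MAX_ENTRIES = 500; // cap from the question
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final ConcurrentLinkedQueue<K> queue = new ConcurrentLinkedQueue<>();

    public void put(K key, V value) {
        map.put(key, value);
        queue.add(key); // remember insertion order; re-puts leave stale keys in the queue
        while (map.size() > MAX_ENTRIES) {
            K eldest = queue.poll(); // oldest key in FIFO order
            if (eldest == null) {
                break;
            }
            map.remove(eldest);
        }
    }

    public V get(K key) {
        return map.get(key);
    }
}
```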

Update: Other possibilities in this area include a Java Collections synchronization wrapper and the Java 1.6 ConcurrentSkipListMap.
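For the synchronized-wrapper route, a sketch could look like the following: it wraps the access-ordered LinkedHashMap from the question in Collections.synchronizedMap(), so every operation shares one lock, which keeps the LRU semantics but gives up the concurrency that motivated the question.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLruExample {
    // Sketch: an access-ordered LinkedHashMap wrapped so that all operations
    // share one lock. Iteration still has to be synchronized manually on the wrapper.
    static final Map<String, String> CACHE = Collections.synchronizedMap(
            new LinkedHashMap<String, String>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > 500; // cap from the question
                }
            });

    public static void main(String[] args) {
        CACHE.put("a", "1");
        System.out.println(CACHE.get("a"));
    }
}
```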
