Memory Fully Utilized by Java ConcurrentHashMap (under Tomcat)


Problem Description

This is an in-memory store (it serves as a cache) that consists of nothing but a static ConcurrentHashMap (CHM).

All incoming HTTP request data is stored in this ConcurrentHashMap, and there is an asynchronous scheduler process that takes the data from the same ConcurrentHashMap and removes each key/value pair after storing it in the database.
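A minimal sketch of the pattern described above (class and method names are illustrative, not from the original code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RequestCache {
    // Static map shared by all HTTP handler threads.
    static final Map<String, String> CACHE = new ConcurrentHashMap<String, String>();

    // Called on the HTTP request path.
    static void record(String key, String payload) {
        CACHE.put(key, payload);
    }

    // Called by the async scheduler: persist each entry, then remove it.
    // The two-argument remove() only deletes the entry if it still maps to
    // the value just persisted, so a concurrent update is not lost.
    static void drain() {
        for (Map.Entry<String, String> e : CACHE.entrySet()) {
            saveToDatabase(e.getKey(), e.getValue()); // hypothetical DB call
            CACHE.remove(e.getKey(), e.getValue());
        }
    }

    static void saveToDatabase(String key, String value) {
        // placeholder for the real database write
    }
}
```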

The system runs fine and smooth, but I just discovered that under the following conditions, memory becomes fully utilized (2.5GB) and all CPU time is spent performing GC:

- concurrent HTTP hits at 1000/s
- the same concurrent load maintained for a period of 15 minutes

The async process logs the remaining size of the CHM every time it writes to the database. CHM.size() stays at around Min: 300 to Max: 3500.

I thought there was a memory leak in this application, so I used Eclipse MAT to look at the heap dump. After running the Suspect Report, I got these comments from MAT:

由"org.apache.catalina.loader.StandardClassLoader @ 0x853f0280"加载的"org.apache.catalina.session.StandardManager"的一个实例占用2,135,429,456(94.76%)字节.内存存储在由"加载的"java.util.concurrent.ConcurrentHashMap $ Segment []"的一个实例中.

One instance of "org.apache.catalina.session.StandardManager" loaded by "org.apache.catalina.loader.StandardClassLoader @ 0x853f0280" occupies 2,135,429,456 (94.76%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "".

3,646,166 instances of java.util.concurrent.ConcurrentHashMap$Segment retain >= 2,135,429,456 bytes.

Length    # Objects      Shallow Heap      Retained Heap 
0         3,646,166      482,015,968       >= 2,135,429,456 

I interpret the Length 0 above as empty records left inside the CHM (each time I call the CHM.remove() method). The count is consistent with the number of records in the database: 3,646,166 records were in the database when this dump was created.

The strange scenario is: if I pause the stress test, the heap memory utilization gradually drops back down to 25MB. This takes about 30-45 minutes. I have re-simulated this and the curve looks similar to the VisualVM graph below:

Questions:

1) Does this look like a memory leak?

2) For each remove(Object key, Object value) call that removes a <key:value> from the CHM, does the removed object get GC'd?

3) Is this something to do with the GC settings? I have added the following GC parameters, but they did not help:

-XX:+UseParallelGC
-XX:+UseParallelOldGC
-XX:GCTimeRatio=19
-XX:+PrintGCTimeStamps
-XX:ParallelGCThreads=6
-verbose:gc

4) Any idea to resolve this is very much appreciated! :)

NEW 5) Could it be because all my references are hard references? My understanding is that once the HTTP session has ended, all the variables that are not static become available for GC.
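Regarding question 2: remove() deletes the map entry, and the removed key/value objects become eligible for GC once nothing else references them; the two-argument form is conditional. A quick sketch (note that in the Java 6 ConcurrentHashMap, removing entries does not shrink the internal segment/table arrays):

```java
import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<String, String>();
        map.put("k", "v1");

        // Conditional remove: only succeeds if "k" currently maps to the given value.
        boolean staleRemove = map.remove("k", "other"); // false, entry kept
        boolean realRemove  = map.remove("k", "v1");    // true, entry gone

        System.out.println(staleRemove + " " + realRemove + " " + map.size());
        // prints "false true 0"
        // After removal, the value object is GC-eligible as soon as no other
        // strong reference (e.g. an HTTP session attribute) still holds it.
    }
}
```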

NEW Note: I tried replacing the CHM with Ehcache 2.2.0, but I get the same OutOfMemoryException problem. I suppose Ehcache is also using a ConcurrentHashMap internally.

Server spec:

- Xeon quad core, 8 threads
- 4GB RAM
- Windows 2008 R2
- Tomcat 6.0.29

Answer

This problem has bugged me for a bad 7 days! And finally I found the real problem! Below are the things I tried that failed to solve the OutOfMemory exception:

- changed from ConcurrentHashMap to Ehcache (turns out Ehcache also uses a ConcurrentHashMap internally)

- changed all the hard references to soft references

- followed the approach described by Dr. Heinz M. Kabutz
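A hedged sketch of the soft-reference variant tried above (names are illustrative): values are wrapped in SoftReference so the GC may reclaim them under memory pressure. Note this did not help here, because Tomcat's live HTTP sessions still held strong references to the same objects.

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoftCache {
    static final Map<String, SoftReference<String>> CACHE =
            new ConcurrentHashMap<String, SoftReference<String>>();

    static void put(String key, String value) {
        CACHE.put(key, new SoftReference<String>(value));
    }

    static String get(String key) {
        SoftReference<String> ref = CACHE.get(key);
        String value = (ref == null) ? null : ref.get();
        if (ref != null && value == null) {
            // The GC cleared the referent; drop the now-useless map entry.
            CACHE.remove(key, ref);
        }
        return value;
    }
}
```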

The million dollar question is really: "why does memory start to be released back to the heap pool 30-45 minutes later?"

The actual root cause was that something else was still holding the session variables, and the culprit is that the HTTP session within Tomcat was still active! Hence, even though the HTTP request had completed, with a timeout setting of 30 minutes Tomcat holds the session information for 30 minutes before the JVM can GC it. The problem was solved immediately after changing the timeout setting to 1 minute for testing:

$tomcat_folder\conf\web.xml

<session-config>
    <session-timeout>1</session-timeout>
</session-config>

Hope this will help anyone out there with a similar problem.
