Java process memory usage (jcmd vs pmap)

Question

I have a Java application running on Java 8 inside a Docker container. The process starts a Jetty 9 server and a web application is being deployed. The following JVM options are passed: -Xms768m -Xmx768m.
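For reference, the jcmd VM.native_memory output shown below is only produced when Native Memory Tracking is enabled at JVM startup. A minimal sketch of such a launch line, assuming summary-level tracking (the Jetty start.jar path is a placeholder, not taken from the question):

$ # placeholder command; only the -XX:NativeMemoryTracking flag is the essential part
$ java -server -Xms768m -Xmx768m \
       -XX:NativeMemoryTracking=summary \
       -jar /opt/jetty/start.jar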

Recently I noticed that the process consumes a lot of memory:

$ ps aux 1
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
app          1  0.1 48.9 5268992 2989492 ?     Ssl  Sep23   4:47 java -server ...

$ pmap -x 1
Address           Kbytes     RSS   Dirty Mode  Mapping
...
total kB         5280504 2994384 2980776

$ jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1378791KB, committed=1049931KB
-                 Java Heap (reserved=786432KB, committed=786432KB)
                            (mmap: reserved=786432KB, committed=786432KB) 

-                     Class (reserved=220113KB, committed=101073KB)
                            (classes #17246)
                            (malloc=7121KB #25927) 
                            (mmap: reserved=212992KB, committed=93952KB) 

-                    Thread (reserved=47684KB, committed=47684KB)
                            (thread #47)
                            (stack: reserved=47288KB, committed=47288KB)
                            (malloc=150KB #236) 
                            (arena=246KB #92)

-                      Code (reserved=257980KB, committed=48160KB)
                            (malloc=8380KB #11150) 
                            (mmap: reserved=249600KB, committed=39780KB) 

-                        GC (reserved=34513KB, committed=34513KB)
                            (malloc=5777KB #280) 
                            (mmap: reserved=28736KB, committed=28736KB) 

-                  Compiler (reserved=276KB, committed=276KB)
                            (malloc=146KB #398) 
                            (arena=131KB #3)

-                  Internal (reserved=8247KB, committed=8247KB)
                            (malloc=8215KB #20172) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=19338KB, committed=19338KB)
                            (malloc=16805KB #184025) 
                            (arena=2533KB #1)

-    Native Memory Tracking (reserved=4019KB, committed=4019KB)
                            (malloc=186KB #2933) 
                            (tracking overhead=3833KB)

-               Arena Chunk (reserved=187KB, committed=187KB)
                            (malloc=187KB) 

As you can see, there is a huge difference between the RSS (2.8 GB) and what is actually shown by the VM native memory statistics (1.0 GB committed, 1.3 GB reserved).

Why is there such a huge difference? I understand that RSS also includes the memory of shared libraries, but after analyzing the verbose pmap output I realized that the shared libraries are not the issue; rather, the memory is consumed by mappings labelled [ anon ]. Why has the JVM allocated so many anonymous memory blocks?
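For anyone reproducing this analysis, a quick way to surface the largest mappings is to sort the pmap output by its RSS column (a sketch; the column numbers assume the procps pmap -x layout shown above, and PID 1 is the container's Java process):

$ # sort mappings numerically by RSS (3rd column) and show the 20 largest
$ pmap -x 1 | sort -n -k3 | tail -20

In the situation described here, the biggest entries are the [ anon ] regions rather than mapped libraries or jar files.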

I searched and found the following topic: Why does a JVM report more committed memory than the Linux process resident set size? However, the case described there is different, because the RSS shows less memory usage than the JVM stats. I have the opposite situation and can't figure out the reason.
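A common follow-up check in this situation is to take an NMT baseline and diff it later; both subcommands are standard modes of jcmd VM.native_memory (sketch):

$ jcmd 1 VM.native_memory baseline
$ # ...let the application run while RSS keeps growing...
$ jcmd 1 VM.native_memory summary.diff

If the diff stays roughly flat while RSS climbs, the growth is happening outside what NMT tracks (for example in native allocations made by libraries), which is consistent with the anonymous mappings seen in pmap.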

Answer

I was facing a similar issue with one of our Apache Spark jobs, where we submitted our application as a fat jar. After analyzing thread dumps we figured out that Hibernate was the culprit: we loaded the Hibernate classes on startup of the application, and this was actually using java.util.zip.Inflater.inflateBytes to read the Hibernate class files, which was overshooting our native resident memory usage by almost 1.5 GB. Here is the bug raised against Hibernate for this issue: https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc. The patch suggested in the comments worked for us. Hope this helps.
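As a sketch of the kind of check described above (the output file name is just an example), a thread dump can be searched for Inflater frames to see whether class-file decompression is active during startup:

$ jcmd 1 Thread.print > threads.txt
$ grep -B 5 "java.util.zip.Inflater" threads.txt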
