Java process memory usage (jcmd vs pmap)


Problem Description

I have a Java application running on Java 8 inside a Docker container. The process starts a Jetty 9 server and a web application is being deployed. The following JVM options are passed: -Xms768m -Xmx768m.
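Note that the jcmd VM.native_memory output shown further down is only available when Native Memory Tracking is switched on at startup. A minimal sketch of the launch flags, assuming a plain java invocation (the heap settings are the ones from the question; the NMT flag and the jar name are only illustrative):

$ java -server -Xms768m -Xmx768m \
       -XX:NativeMemoryTracking=summary \
       -jar start.jar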

Recently I noticed that the process consumes a lot of memory:

$ ps aux 1
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
app          1  0.1 48.9 5268992 2989492 ?     Ssl  Sep23   4:47 java -server ...

$ pmap -x 1
Address           Kbytes     RSS   Dirty Mode  Mapping
...
total kB         5280504 2994384 2980776

$ jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1378791KB, committed=1049931KB
-                 Java Heap (reserved=786432KB, committed=786432KB)
                            (mmap: reserved=786432KB, committed=786432KB) 

-                     Class (reserved=220113KB, committed=101073KB)
                            (classes #17246)
                            (malloc=7121KB #25927) 
                            (mmap: reserved=212992KB, committed=93952KB) 

-                    Thread (reserved=47684KB, committed=47684KB)
                            (thread #47)
                            (stack: reserved=47288KB, committed=47288KB)
                            (malloc=150KB #236) 
                            (arena=246KB #92)

-                      Code (reserved=257980KB, committed=48160KB)
                            (malloc=8380KB #11150) 
                            (mmap: reserved=249600KB, committed=39780KB) 

-                        GC (reserved=34513KB, committed=34513KB)
                            (malloc=5777KB #280) 
                            (mmap: reserved=28736KB, committed=28736KB) 

-                  Compiler (reserved=276KB, committed=276KB)
                            (malloc=146KB #398) 
                            (arena=131KB #3)

-                  Internal (reserved=8247KB, committed=8247KB)
                            (malloc=8215KB #20172) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=19338KB, committed=19338KB)
                            (malloc=16805KB #184025) 
                            (arena=2533KB #1)

-    Native Memory Tracking (reserved=4019KB, committed=4019KB)
                            (malloc=186KB #2933) 
                            (tracking overhead=3833KB)

-               Arena Chunk (reserved=187KB, committed=187KB)
                            (malloc=187KB) 

As you can see, there is a huge difference between the RSS (2.8GB) and what is actually being shown by the VM native memory statistics (1.0GB committed, 1.3GB reserved).

Why is there such a huge difference? I understand that RSS also includes the memory mapped for shared libraries, but after analyzing the verbose pmap output I realized that it is not a shared-library issue; rather, the memory is consumed by something called [ anon ] mappings. Why does the JVM allocate so many anonymous memory blocks?
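For what it's worth, one way to narrow down which [ anon ] regions are growing is to sort the pmap output by resident size and compare two NMT snapshots taken a while apart. A sketch only, assuming the JVM is PID 1 as in the question and NMT is enabled:

$ pmap -x 1 | sort -n -k3 | tail -20      # largest mappings by RSS (3rd column)
$ jcmd 1 VM.native_memory baseline        # record the current NMT counters
$ jcmd 1 VM.native_memory summary.diff    # later: show growth relative to the baseline

Anything that keeps growing in pmap but not in the NMT diff is, as far as I understand, allocated outside what NMT tracks (for example raw malloc calls made by native libraries such as zlib).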

While searching I found the following topic: Why does a JVM report more committed memory than the linux process resident set size? However, the case described there is different, because RSS shows less memory usage than the JVM stats do. I have the opposite situation and can't figure out the reason.

Answer

I was facing a similar issue with one of our Apache Spark jobs, where we submit the application as a fat jar. After analyzing thread dumps we figured out that Hibernate was the culprit: we loaded the Hibernate classes on startup of the application, and that code was using java.util.zip.Inflater.inflateBytes to read the Hibernate class files. This overshot our native resident memory usage by almost 1.5 GB. Here is the bug raised against Hibernate for this issue: https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc. The patch suggested in the comments worked for us. Hope this helps.
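If you want to do the same kind of triage on your process, the thread dumps mentioned above can be taken from the running container with jcmd as well. A sketch, again assuming PID 1 as in the question:

$ jcmd 1 Thread.print          # thread dump; look for threads sitting in zip/Inflater native code
$ jcmd 1 GC.class_histogram    # per-class instance counts, useful for spotting suspicious loaders or caches

Repeating the thread dump a few times and checking which stacks keep reappearing is usually enough to point at the component driving the native allocations.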
