Growing Resident Set Size in JVM


Problem Description

I have a Java process running on 64-bit Linux ("CentOS Linux release 7.3.1611") on a machine with 7.6 GB of RAM.

Below are some of the JVM flags in use:

  1. -Xmx3500m
  2. -Xms3500m
  3. -XX:MaxMetaspaceSize=400m
  4. -XX:CompressedClassSpaceSize=35m

Note: The thread stack size (1 MB) and code cache (240 MB) are left at their defaults, and the JDK version is 1.8.0_252.
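
For reference, a minimal sketch of the full launch line these settings imply, assuming a hypothetical application jar (myapp.jar) and adding -XX:NativeMemoryTracking=summary so that the jcmd output shown later is available:

# assumed launch line; myapp.jar is a placeholder
java -Xms3500m -Xmx3500m \
     -XX:MaxMetaspaceSize=400m \
     -XX:CompressedClassSpaceSize=35m \
     -XX:NativeMemoryTracking=summary \
     -jar myapp.jar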

While running the top command, I observed that 6.3 GB of RAM is held by the Java process.

PR   NI    VIRT     RES    SHR S  %CPU %MEM   TIME+   COMMAND   
20   0  28.859g  6.341g  22544 S 215.2 83.1   4383:23 java    

I tried to analyse the native memory of the JVM using the jcmd, jmap, and jstat commands.
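
The outputs below were gathered with commands roughly like these (the <pid> placeholder and the 1-second jstat interval are assumptions):

jmap -heap <pid>                       # heap configuration and usage
jstat -gc <pid> 1000                   # GC statistics, one sample per second
jcmd <pid> VM.native_memory summary    # NMT summary (needs -XX:NativeMemoryTracking at startup)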

Output of the jmap -heap command:

Debugger attached successfully.
Server compiler detected.
JVM version is 25.252-b14

using thread-local object allocation.
Garbage-First (G1) GC with 33 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 3670016000 (3500.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 2202009600 (2100.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 36700160 (35.0MB)
   MaxMetaspaceSize         = 419430400 (400.0MB)
   G1HeapRegionSize         = 1048576 (1.0MB)

Heap Usage:
G1 Heap:
   regions  = 3500
   capacity = 3670016000 (3500.0MB)
   used     = 1735444208 (1655.048568725586MB)
   free     = 1934571792 (1844.951431274414MB)
   47.28710196358817% used
G1 Young Generation:
Eden Space:
   regions  = 1311
   capacity = 2193620992 (2092.0MB)
   used     = 1374683136 (1311.0MB)
   free     = 818937856 (781.0MB)
   62.66730401529637% used
Survivor Space:
   regions  = 113
   capacity = 118489088 (113.0MB)
   used     = 118489088 (113.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 249
   capacity = 1357905920 (1295.0MB)
   used     = 241223408 (230.04856872558594MB)
   free     = 1116682512 (1064.951431274414MB)
   17.76436824135799% used

485420 interned Strings occupying 83565264 bytes.

Output of the jstat -gc command:

 S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
 0.0   33792.0  0.0   33792.0 1414144.0 1204224.0 2136064.0  1558311.7  262872.0 259709.5 19200.0 18531.5  22077  985.995  10     41.789 1027.785
 0.0   33792.0  0.0   33792.0 1414144.0 1265664.0 2136064.0  1558823.7  262872.0 259709.5 19200.0 18531.5  22077  985.995  10     41.789 1027.785
 0.0   63488.0  0.0   63488.0 124928.0 32768.0  3395584.0  1526795.8  262872.0 259709.5 19200.0 18531.5  22078  986.041  10     41.789 1027.830
 0.0   63488.0  0.0   63488.0 124928.0 49152.0  3395584.0  1526795.8  262872.0 259709.5 19200.0 18531.5  22078  986.041  10     41.789 1027.830
 0.0   63488.0  0.0   63488.0 124928.0 58368.0  3395584.0  1526795.8  262872.0 259709.5 19200.0 18531.5  22078  986.041  10     41.789 1027.830

Even the total produced by the output of "jcmd <pid> VM.native_memory summary" is only about 5.0 GB, which is nowhere near 6.3 GB, so I could not find where the remaining 1.3 GB is used.
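
Assuming the process was started with -XX:NativeMemoryTracking=summary, one way to check whether any NMT category keeps growing is a baseline/diff cycle, sketched below:

jcmd <pid> VM.native_memory baseline                 # record the current NMT numbers
# ... let the process run while RSS keeps growing ...
jcmd <pid> VM.native_memory summary.diff scale=MB    # growth per category since the baseline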

I tried to find out how the 6.3 GB is actually mapped by the JVM, so I decided to inspect the /proc/<pid> directory.

In the /proc/<pid>/status file:

VmRSS:      6649680 kB
RssAnon:    6627136 kB
RssFile:      22544 kB
RssShmem:         0 kB

From this I found that most of the 6.3 GB is occupied by anonymous memory.

Output of the pmap command (truncated):

Address           Kbytes     RSS   Dirty Mode  Mapping
0000000723000000 3607296 3606076 3606076 rw---   [ anon ]
00000007ff2c0000   12544       0       0 -----   [ anon ]
00007f4584000000     132       4       4 rw---   [ anon ]
00007f4584021000   65404       0       0 -----   [ anon ]
00007f4588000000     132      12      12 rw---   [ anon ]
00007f4588021000   65404       0       0 -----   [ anon ]
00007f458c000000     132       4       4 rw---   [ anon ]
00007f458c021000   65404       0       0 -----   [ anon ]
00007f4590000000     132       4       4 rw---   [ anon ]
00007f4590021000   65404       0       0 -----   [ anon ]
00007f4594000000     132       8       8 rw---   [ anon ]
00007f4594021000   65404       0       0 -----   [ anon ]
00007f4598000000     132       4       4 rw---   [ anon ]
00007f4598021000   65404       0       0 -----   [ anon ]
00007f459c000000    2588    2528    2528 rw---   [ anon ]

I found that the first anonymous mapping is probably the heap, since it is about 3.4 GB in size. However, I was not able to find out how the rest of the anonymous space is used.
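
One rough way to see where the rest goes, assuming the procps pmap used above, is to sort its extended output by resident size; repeated regions of roughly 64 MB then stand out:

pmap -x <pid> | sort -k3 -n -r | head -20    # largest mappings by RSS (column 3)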

I need help finding out how the extra 1.3 GB is used by the JVM process.

Any information about memory used by the JVM beyond what is reported by Native Memory Tracking would be appreciated.

Answer

As discussed here, besides the areas covered by Native Memory Tracking, there are other things that consume memory in a JVM process.

Many anonymous regions of exactly 64 MB (like those in your pmap output) suggest that these are malloc arenas. The standard glibc allocator is known to have issues with excessive memory usage, especially in applications with many threads. I suggest using jemalloc (or tcmalloc, mimalloc) as a drop-in replacement for the standard allocator; it does not have the excessive memory usage mentioned above. An alternative solution is to limit the number of malloc arenas with the MALLOC_ARENA_MAX environment variable.
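
A minimal sketch of both options, assuming jemalloc is installed at the path shown (it varies by distribution) and using the placeholder myapp.jar:

# Option 1: preload jemalloc as a drop-in replacement for glibc malloc
LD_PRELOAD=/usr/lib64/libjemalloc.so.1 java -Xms3500m -Xmx3500m -jar myapp.jar

# Option 2: keep glibc malloc but cap the number of arenas
MALLOC_ARENA_MAX=2 java -Xms3500m -Xmx3500m -jar myapp.jar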

If the problem persists even after switching to jemalloc, that is likely a sign of a native memory leak. For example, native leaks in a Java application may be caused by:

  • unclosed resources/streams: ZipInputStream, DirectoryStream, Inflater, Deflater, etc.
  • JNI libraries and agent libraries, including the standard jdwp agent
  • improper bytecode instrumentation

To find the source of such a leak, you may also use jemalloc with its built-in profiling feature. However, jemalloc is not capable of unwinding Java stack traces.
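
A sketch of how that profiling might be enabled, assuming a jemalloc build with profiling support (--enable-prof) and current option names; the dump prefix and interval are arbitrary:

# dump a profile roughly every 2^30 bytes (1 GB) of allocations
MALLOC_CONF="prof:true,lg_prof_interval:30,prof_prefix:/tmp/jeprof" \
LD_PRELOAD=/usr/lib64/libjemalloc.so.1 java -jar myapp.jar

# later, render one of the dumps as a text report (native stacks only)
jeprof --text $(which java) /tmp/jeprof.*.heap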

async-profiler can show mixed Java+native stacks. Although its primary purpose is CPU and allocation profiling, async-profiler can also help to find native memory leaks in a Java application.
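
For example, assuming a recent async-profiler release that supports profiling arbitrary native functions, tracing malloc calls for a minute and writing a flame graph might look like this (duration and output name are arbitrary):

./profiler.sh -d 60 -e malloc -f malloc-calls.html <pid>    # who calls malloc, with Java frames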

For details and more examples, see my Memory Footprint of a Java Process presentation.

