Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container


Problem description




Scenario:

I have a JVM running in a docker container. I did some memory analysis using two tools: 1) top and 2) Java Native Memory Tracking (NMT). The numbers look confusing and I am trying to find out what's causing the differences.

Question:

The RSS is reported as 1272 MB for the Java process and the total Java memory is reported as 790.55 MB. How can I explain where the remaining 1272 - 790.55 = 481.45 MB went?

Why I want to keep this issue open even after looking at this question on SO:

I did see the answer and the explanation makes sense. However, after getting output from Java NMT and pmap -x, I am still not able to concretely map which Java memory addresses are actually resident and physically mapped. I need some concrete explanation (with detailed steps) to find what's causing this difference between RSS and Java total committed memory.
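
One way to make that mapping concrete is to capture both views of the same process at the same moment. A minimal sketch, assuming the container is called my-app and the JVM is PID 1 inside it (as in the answer below), that -XX:NativeMemoryTracking=summary is set, and that pmap and jcmd are available in the image:

  # committed memory, category by category, as the JVM itself accounts for it
  docker exec my-app jcmd 1 VM.native_memory summary > nmt.txt

  # virtual size (Kbytes) and resident size (RSS) per mapping, for the same PID
  docker exec my-app pmap -x 1 > pmap.txt

  # total RSS as seen from the host (docker inspect yields the host-side PID)
  ps -o pid,rss,cmd -p "$(docker inspect --format '{{.State.Pid}}' my-app)"

Comparing nmt.txt against pmap.txt then shows which committed ranges are actually backed by physical pages.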

(Screenshots of the top output, the Java NMT summary and the docker memory stats were attached to the original question; they are not reproduced here.)

Graphs

I have a docker container that has been running for more than 48 hours. Now, when I see a graph which contains:

  1. Total memory given to the docker container = 2 GB
  2. Java Max Heap = 1 GB
  3. Total committed (JVM) = always less than 800 MB
  4. Heap Used (JVM) = always less than 200 MB
  5. Non Heap Used (JVM) = always less than 100 MB.
  6. RSS = around 1.1 GB.

So, what's eating the memory between 1.1 GB (RSS) and 800 MB (Java total committed memory)?

Solution

You have some clues in "Analyzing java memory usage in a Docker container" by Mikhail Krestjaninoff:

Resident Set Size is the amount of physical memory currently allocated and used by a process (without swapped out pages). It includes the code, data and shared libraries (which are counted in every process which uses them)

Why does docker stats info differ from the ps data?

The answer to the first question is very simple - Docker has a bug (or a feature, depending on your mood): it includes file caches in the total memory usage info. So we can just avoid this metric and use the ps info about RSS.
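
If you want to verify that for your own container, the cache share is visible in the cgroup accounting. A quick check (a sketch; assumes cgroup v1 and a container named my-app):

  # the number docker stats shows (on affected versions it still includes page cache)
  docker stats --no-stream my-app

  # cgroup v1 breakdown: "cache" is page cache, "rss" is anonymous memory actually used
  docker exec my-app sh -c 'grep -E "^(cache|rss) " /sys/fs/cgroup/memory/memory.stat'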

Well, ok - but why is RSS higher than Xmx?

Theoretically, in case of a java application

RSS = Heap size + MetaSpace + OffHeap size

where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and JVM code itself.
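
If you want that sum to stay predictable inside a fixed-size container, most of those components can be capped explicitly. A hedged example of such a flag set (the values are purely illustrative, and app.jar stands in for your application):

  # cap heap, class metadata, direct (NIO) buffers and the per-thread stack size,
  # and enable NMT so the result can be checked with jcmd
  java -Xmx1g -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=256m \
       -Xss1m -XX:NativeMemoryTracking=summary -jar app.jar

Mapped files and the JVM's own code and GC structures still sit outside these caps, which is why RSS can exceed their sum.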

Since JDK 1.8.40 (see the JDK 8 VM enhancements notes: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/enhancements-8.html) we have the Native Memory Tracker!

As you can see, I’ve already added -XX:NativeMemoryTracking=summary property to the JVM, so we can just invoke it from the command line:

docker exec my-app jcmd 1 VM.native_memory summary

(This is what the OP did)
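
If a single snapshot is not enough, the same jcmd interface can also show growth per NMT category over time (again assuming the JVM is PID 1 in a container called my-app):

  # record a baseline now ...
  docker exec my-app jcmd 1 VM.native_memory baseline

  # ... and later print the per-category change relative to that baseline
  docker exec my-app jcmd 1 VM.native_memory summary.diff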

Don’t worry about the "Unknown" section - it seems that NMT is an immature tool and can’t deal with CMS GC (this section disappears when you use another GC).

Keep in mind that NMT displays "committed" memory, not "resident" memory (which you get through the ps command). In other words, a memory page can be committed without being counted as resident (until it is directly accessed).

That means that NMT results for non-heap areas (heap is always preinitialized) might be bigger than RSS values.

(that is where "Why does a JVM report more committed memory than the linux process resident set size?" comes in)

As a result, despite the fact that we set the jvm heap limit to 256m, our application consumes 367M. The "other" 164M are mostly used for storing class metadata, compiled code, threads and GC data.

The first three points are often constant for an application, so the only thing which increases with the heap size is GC data.
This dependency is linear, but the "k" coefficient (y = kx + b) is much less than 1.
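
To get the same two numbers for your own process - what the JVM has committed versus what is actually resident - you can put NMT's total next to the kernel's figure (a sketch, same my-app / PID 1 assumptions as above):

  # NMT's own total line: "reserved" and "committed" for the whole JVM
  docker exec my-app jcmd 1 VM.native_memory summary | grep -i total

  # resident set size (in KB) for the same process, as the kernel reports it
  ps -o rss= -p "$(docker inspect --format '{{.State.Pid}}' my-app)"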


More generally, this seems to be tracked by issue 15020, which reports a similar problem since docker 1.7:

I'm running a simple Scala (JVM) application which loads a lot of data into and out of memory.
I set the JVM to 8G heap (-Xmx8G). I have a machine with 132G memory, and it can't handle more than 7-8 containers because they grow well past the 8G limit I imposed on the JVM.

(docker stats was reported as misleading before, as it apparently includes file caches in the total memory usage info)

docker stat shows that each container itself is using much more memory than the JVM is supposed to be using. For instance:

CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
dave-1 3.55% 10.61 GB/135.3 GB 7.85% 7.132 MB/959.9 MB
perf-1 3.63% 16.51 GB/135.3 GB 12.21% 30.71 MB/5.115 GB

It almost seems that the JVM is asking the OS for memory, which is allocated within the container, and the JVM is freeing memory as its GC runs, but the container doesn't release the memory back to the main OS. So... memory leak.
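
Whether it is really a leak or just memory the JVM holds on to can be checked by forcing a full GC and watching whether the host-visible RSS actually shrinks. A rough probe (same my-app / PID 1 assumptions; note that many collectors keep freed heap committed rather than returning it to the OS):

  # ask the JVM for a full GC ...
  docker exec my-app jcmd 1 GC.run

  # ... then see whether the resident set size (KB) reported by the kernel went down
  ps -o rss= -p "$(docker inspect --format '{{.State.Pid}}' my-app)"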
