Docker Container - JVM Memory Spike - Arena Chunk Memory Space


Problem Description

I'm observing a large, discrete spike in JVM memory during performance tests against my Java web application running in an ECS/EC2/Docker/CentOS 7/Tomcat/OpenJDK 8 environment.

The performance test is quite simple: it consists of continuous, concurrent requests to an AWS Application Load Balancer sitting in front of a pair of Docker containers running on EC2 hosts managed by Elastic Container Service. Typically the concurrency level is 30 simultaneous load test client connections/threads. Within a few minutes, one of the Docker containers is usually afflicted.

The memory spike appears to be in non-heap memory. Specifically, the memory spike seems to be related to the Arena Chunk memory space. When comparing the memory footprint of a JVM that hasn't experienced the spike with one that has, the Thread and Arena Chunk memory spaces stand out.

Below is a comparison of the VM internal memory using the jcmd utility.

Notice the absurdly high number for Arena Chunk memory and the comparatively high number for Thread memory.
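
For context, the VM.native_memory summaries shown further down require Native Memory Tracking to be enabled when the JVM starts. A minimal sketch of how such a summary can be captured (the exact startup flags used in this environment aren't shown in the question; CATALINA_OPTS is just one common place to set them for Tomcat, and <pid> is a placeholder):

# Enable NMT at JVM startup (summary level; "detail" gives a per-call-site breakdown)
export CATALINA_OPTS="$CATALINA_OPTS -XX:NativeMemoryTracking=summary"

# Once the container is under load, dump the summary for the JVM process
jcmd <pid> VM.native_memory summary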

The concurrency level of the test can create an immediate demand for threads in the Tomcat request thread pool. However, the spike doesn't always occur in the first wave of requests.
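
As an aside, one rough way to check whether the load is actually forcing Tomcat to spin up new request threads is to count the connector threads over time; the thread-name prefix below assumes Tomcat's default NIO connector naming (http-nio-<port>-exec-*) and may differ in this setup:

# Count HTTP connector threads in the running JVM (run repeatedly during the test)
jstack <pid> | grep -c 'http-nio'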

Have you seen something similar? Do you know what is causing the spike?


Docker Stats

Memory Spike Container:

Mon Oct  9 00:31:45 UTC 2017
89440337e936        27.36%              530 MiB / 2.93 GiB      17.67%              15.6 MB / 24.1 MB   122 MB / 2.13 MB    0
Mon Oct  9 00:31:48 UTC 2017
89440337e936        114.13%             2.059 GiB / 2.93 GiB    70.29%              16.3 MB / 25.1 MB   122 MB / 2.13 MB    0

Normal Container:

Mon Oct  9 00:53:41 UTC 2017
725c23df2562        0.08%               533.4 MiB / 2.93 GiB   17.78%              5 MB / 8.15 MB      122 MB / 29.3 MB    0
Mon Oct  9 00:53:44 UTC 2017
725c23df2562        0.07%               533.4 MiB / 2.93 GiB   17.78%              5 MB / 8.15 MB      122 MB / 29.3 MB    0
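
The samples above were presumably collected by polling docker stats alongside a timestamp; the columns correspond to docker stats' CONTAINER, CPU %, MEM USAGE / LIMIT, MEM %, NET I/O, BLOCK I/O and PIDS. A rough equivalent of the capture loop (the exact invocation isn't shown in the question):

# Emit a UTC timestamp plus a one-shot stats sample every few seconds
while true; do date -u; docker stats --no-stream; sleep 3; done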


VM Internal Memory

Memory Spike JVM:

# jcmd 393 VM.native_memory summary
393:

Native Memory Tracking:

Total: reserved=1974870KB, committed=713022KB
-                 Java Heap (reserved=524288KB, committed=524288KB)
                            (mmap: reserved=524288KB, committed=524288KB) 

-                     Class (reserved=1096982KB, committed=53466KB)
                            (classes #8938)
                            (malloc=1302KB #14768) 
                            (mmap: reserved=1095680KB, committed=52164KB) 

-                    Thread (reserved=8423906KB, committed=8423906KB)
                            (thread #35)
                            (stack: reserved=34952KB, committed=34952KB)
                            (malloc=114KB #175) 
                            (arena=8388840KB #68)

-                      Code (reserved=255923KB, committed=37591KB)
                            (malloc=6323KB #8486) 
                            (mmap: reserved=249600KB, committed=31268KB) 

-                        GC (reserved=6321KB, committed=6321KB)
                            (malloc=4601KB #311) 
                            (mmap: reserved=1720KB, committed=1720KB) 

-                  Compiler (reserved=223KB, committed=223KB)
                            (malloc=93KB #276) 
                            (arena=131KB #3)

-                  Internal (reserved=2178KB, committed=2178KB)
                            (malloc=2146KB #11517) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=13183KB, committed=13183KB)
                            (malloc=9244KB #85774) 
                            (arena=3940KB #1)

-    Native Memory Tracking (reserved=1908KB, committed=1908KB)
                            (malloc=8KB #95) 
                            (tracking overhead=1900KB)

-               Arena Chunk (reserved=18014398501093554KB, committed=18014398501093554KB)
                            (malloc=18014398501093554KB) 

-                   Unknown (reserved=38388KB, committed=38388KB)
                            (mmap: reserved=38388KB, committed=38388KB) 

Normal JVM:

# jcmd 391 VM.native_memory summary
391:

Native Memory Tracking:

Total: reserved=1974001KB, committed=710797KB
-                 Java Heap (reserved=524288KB, committed=524288KB)
                            (mmap: reserved=524288KB, committed=524288KB) 

-                     Class (reserved=1096918KB, committed=53738KB)
                            (classes #9005)
                            (malloc=1238KB #13654) 
                            (mmap: reserved=1095680KB, committed=52500KB) 

-                    Thread (reserved=35234KB, committed=35234KB)
                            (thread #35)
                            (stack: reserved=34952KB, committed=34952KB)
                            (malloc=114KB #175) 
                            (arena=168KB #68)

-                      Code (reserved=255261KB, committed=35237KB)
                            (malloc=5661KB #8190) 
                            (mmap: reserved=249600KB, committed=29576KB) 

-                        GC (reserved=6321KB, committed=6321KB)
                            (malloc=4601KB #319) 
                            (mmap: reserved=1720KB, committed=1720KB) 

-                  Compiler (reserved=226KB, committed=226KB)
                            (malloc=96KB #317) 
                            (arena=131KB #3)

-                  Internal (reserved=2136KB, committed=2136KB)
                            (malloc=2104KB #11715) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=13160KB, committed=13160KB)
                            (malloc=9221KB #85798) 
                            (arena=3940KB #1)

-    Native Memory Tracking (reserved=1890KB, committed=1890KB)
                            (malloc=8KB #95) 
                            (tracking overhead=1882KB)

-               Arena Chunk (reserved=178KB, committed=178KB)
                            (malloc=178KB) 

-                   Unknown (reserved=38388KB, committed=38388KB)
                            (mmap: reserved=38388KB, committed=38388KB) 
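
A side note for anyone reproducing this comparison: NMT can also diff against a baseline, which makes it easier to see which category (here, Thread and Arena Chunk) grows during the test. A minimal sketch:

# Record a baseline shortly after startup...
jcmd <pid> VM.native_memory baseline

# ...then, after the spike, report only the per-category change
jcmd <pid> VM.native_memory summary.diff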

Solution

A glibc/malloc option, MALLOC_PER_THREAD=0, seems to fix this. However, I decided to use the debian/openjdk Docker base image rather than CentOS, and that also fixed the issue.
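
For completeness, a hedged sketch of how either workaround might be applied; MALLOC_PER_THREAD=0 is the glibc setting named above, and the image name and tag here are placeholders rather than the exact ones used:

# Option 1: keep the CentOS-based image but pass the glibc malloc setting into the container
docker run -e MALLOC_PER_THREAD=0 my-tomcat-image

# Option 2: rebuild the application image on a Debian-based OpenJDK base instead of CentOS,
# e.g. by starting the Dockerfile with:  FROM openjdk:8-jdk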
