Java heap memory management: insufficient memory


Problem description

When a netty async server-and-client project runs on Linux, it gradually uses up all available memory, like this:

So I ran it on Windows instead, and JMC shows the heap like this:

My questions are: why do Windows and Linux behave differently? Is there some way to configure the Linux JVM so that it releases heap memory? Why is there a heap release (GC) on Windows? And how can I find the suspicious piece of code that takes up so much memory?

EDIT: The Linux machine has 4G of RAM and the Windows machine 8G, but I don't think the absolute values cause the difference in behavior. The project does not handle raw ByteBufs directly; it uses HttpServerCodec and HttpObjectAggregator for ByteBuf handling. The command used on Linux is java -jar xx.jar. I would like to know not only why the behavior differs and why the sawtooth appears, but also how to locate whatever is taking up so much memory. JMC shows another figure, and I don't know why a thread can have such a high block count. The netty IO threads show 99LINE 71 ms.

UPDATED: Now I would like to locate which part of the code takes up so much memory. The JMC heap view shows EDEN SPACE is very high; I searched and found that Eden space is where new objects are allocated. Originally the project used spring-boot with Tomcat (Servlet 3.0) as the container and an Apache HttpClient pool as the client; only those parts have been replaced, with a netty asynchronous server and a netty asynchronous client, while everything else remains (Spring is still used for bean management). The netty server and client handlers are shared across all requests (the handlers are singleton Spring beans). With such small changes, I don't believe the number of new objects increased so significantly that it ends up at 1.35G of memory.
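To see for yourself how full Eden is, you can query the JVM's memory pools from inside the process instead of relying on JMC. A minimal sketch using the standard `java.lang.management` API; note that pool names depend on the collector ("PS Eden Space" under Parallel GC, as in the screenshots, "G1 Eden Space" under G1), so matching on the substring "Eden" is an assumption about naming:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Print every heap/non-heap memory pool and its current usage.
// The Eden pool is where freshly allocated ("new") objects land.
public class EdenUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s used=%d bytes%n",
                    pool.getName(), pool.getUsage().getUsed());
        }
    }
}
```

A steadily high Eden reading by itself is not a leak; it only becomes suspicious if the old generation keeps growing after each collection.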

UPDATE: After running the netty and spring-boot versions of the project separately, I have more statistics:

  1. OS memory 8G, spring-boot version: PS Old Generation: capacity = 195MB; used = 47MB; 24% used. It has 692,971 objects with a total size of 41,848,384 bytes.
  2. OS memory 16G, netty version: PS Old Generation: capacity = 488MB; used = 327MB; 67% used. It has 1,243,432 objects with a total size of 221,427,824 bytes.

netty version: the heap dump shows 279,255 instances of io.netty.buffer.PoolSubpage, compared with 7,222 instances of the second most numerous class, org.springframework.core.MethodClassKey. In both versions the number of service objects (our own classes) is limited to no more than 3000.
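A heap dump like the one above can be taken externally with `jmap -dump:live,format=b,file=heap.hprof <pid>`, or programmatically from inside the process via HotSpot's diagnostic MXBean. A minimal sketch (HotSpot-only API; the output path is a hypothetical name), with the resulting .hprof file openable in JMC or Eclipse MAT to see which classes dominate:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Trigger a heap dump from within the running JVM (HotSpot only).
public class HeapDump {
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live=true dumps only reachable objects, which is what matters
        // when hunting a leak; dumpHeap fails if the file already exists.
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        String path = "heap.hprof"; // hypothetical output file
        dump(path);
        System.out.println("dumped " + new File(path).length() + " bytes");
    }
}
```

In the dump viewer, sort by retained size rather than instance count: many small PoolSubpage instances may simply be netty's pooled allocator metadata rather than the real consumer.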

I have tried running with -Xmx1024m on the 4G Linux machine; it still causes the same out-of-memory problem.

Answer

The behavior you are seeing on Windows is normal GC behavior. The application generates garbage until it hits a threshold that causes the GC to run; the GC frees a lot of heap, and then the application starts filling it again. The result is a sawtooth pattern in heap occupancy.
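The sawtooth is easy to reproduce. A minimal sketch that churns short-lived objects and samples used heap before allocation, after allocation, and after a requested collection (System.gc() is only a hint, but most JVM configurations honor it):

```java
// Demonstrate the rise-and-drop heap pattern: allocate garbage,
// then observe used heap fall back after a collection.
public class SawtoothDemo {
    public static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        for (int i = 0; i < 10_000; i++) {
            // Each array becomes unreachable on the next iteration,
            // i.e. it is pure garbage for the collector to reclaim.
            byte[] garbage = new byte[16 * 1024];
            if (garbage.length == 0) System.out.println("unreachable");
        }
        long afterAlloc = usedHeap();
        System.gc(); // request a collection; used heap typically drops
        long afterGc = usedHeap();
        System.out.printf("before=%d afterAlloc=%d afterGc=%d%n",
                before, afterAlloc, afterGc);
    }
}
```

Plotting usedHeap() over time while such a loop runs produces exactly the zig-zag JMC shows on Windows.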

This is normal. Every JVM behaves more or less like this.

The behavior on Linux looks like something is trying to make a large allocation (77MB) in native memory, and failing because the OS refuses to give the JVM that much memory. Typically that happens because the OS has run out of resources: physical RAM, swap space, and so on.

Windows 8G, Linux 4G.

That probably explains it. Your Linux system has only half the physical memory of the Windows system. If you are running netty with a large Java heap AND your Linux OS has not been configured with any swap space, then it is plausible that the JVM is using all of the available virtual memory. It could even be happening at JVM startup.

(If we assume that the max heap size has been set the same for both Windows and Linux, then on Windows there is at least 4.5GB of virtual address space available for other things. On Linux, only 0.5GB. And that 0.5GB has to hold all of the non-heap JVM utilization ... plus the OS and various other user-space processes. It is easy to see how you could have used all of that ... leading to the allocation failure.)
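The arithmetic behind those headroom figures can be made explicit. A tiny sketch, assuming a max heap of 3.5 GB on both machines (a hypothetical value chosen so the numbers match the answer; the question never states the actual -Xmx on Windows):

```java
// Back-of-envelope memory budget: RAM minus max heap leaves the headroom
// for native allocations, the OS, and every other process.
public class MemoryBudget {
    public static void main(String[] args) {
        double maxHeapGb = 3.5;     // assumed -Xmx, not from the question
        double windowsRamGb = 8.0;
        double linuxRamGb = 4.0;
        System.out.printf("Windows headroom: %.1f GB%n", windowsRamGb - maxHeapGb);
        System.out.printf("Linux headroom:   %.1f GB%n", linuxRamGb - maxHeapGb);
    }
}
```

With only 0.5 GB of headroom, a single 77 MB native allocation on top of netty's direct buffers, thread stacks, and JIT/metaspace overhead can plausibly be the one that tips the system over.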

If my theory is correct, then the solution would be to change the JVM command line options to make -Xmx smaller.

(Or increase the available physical / virtual memory. But be careful with increasing the virtual memory by adding swap space. If the virtual/physical ratio is too large you can get virtual memory "thrashing" which can lead to terrible performance.)
