JVM memory usage out of control


Problem Description



I have a Tomcat webapp which does some pretty memory- and CPU-intensive tasks on behalf of clients. This is normal and is the desired functionality. However, when I run Tomcat, memory usage skyrockets over time to upwards of 4.0 GB, at which point I usually kill the process as it's messing with everything else running on my development machine:

[screenshot: OS process monitor showing the Tomcat process above 4 GB]

I thought I had inadvertently introduced a memory leak with my code, but after checking with VisualVM, I'm seeing a different story:

[screenshot: VisualVM heap monitor showing roughly 1 GB used]

VisualVM is showing the heap as taking up approximately a GB of RAM, which is what I set it to do with CATALINA_OPTS="-Xms256m -Xmx1024m".

Why is my system seeing this process as taking up a ton of memory when according to VisualVM, it's taking up hardly any at all?


After a bit of further sniffing around, I'm noticing that if multiple jobs are running simultaneously in the application, memory does not get freed. However, if I wait for each job to complete before submitting another to my BlockingQueue serviced by an ExecutorService, then memory is recycled effectively. How can I debug this? Why would garbage collection/memory reuse differ?
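The submission pattern described above might look roughly like the sketch below. The class name, pool size, and placeholder workload are hypothetical (not from the original app); it only illustrates the concurrent-jobs-through-an-ExecutorService shape being discussed:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class JobRunner {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool backed internally by a LinkedBlockingQueue, similar to
        // the BlockingQueue + ExecutorService setup described in the question.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 8; i++) {
            final int jobId = i;
            pool.submit(() -> {
                // Placeholder for the memory-intensive work: allocate a large
                // scratch buffer that becomes garbage once the job finishes.
                byte[] scratch = new byte[16 * 1024 * 1024];
                scratch[0] = (byte) jobId;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("all jobs done");
    }
}
```

When several such jobs run at once, the JVM grows the heap (and the native bookkeeping around it) to absorb the combined peak, and typically does not hand that memory back to the OS afterwards, which is consistent with the behavior observed.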

Solution

You can't control what you want to control. -Xmx only controls the Java heap; it doesn't control the JVM's consumption of native memory, which is used very differently depending on the implementation. VisualVM is only showing you what the heap is consuming; it doesn't show the native memory the JVM as a whole consumes as an OS process. You will have to use OS-level tools to see that, and they will report radically different numbers, usually much larger than anything VisualVM reports, because the JVM uses native memory in an entirely different way.
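Those OS-level numbers can be read like this. The snippet uses the current shell's own PID so it runs as-is; in practice you would substitute the Tomcat PID (e.g. from `pgrep -f catalina`):

```shell
# Hypothetical target: your Tomcat PID. Using this shell's own PID ($$)
# here only so the example runs anywhere.
PID=$$

# Resident set size (RSS) in kilobytes -- what the OS actually has paged in
# for the whole process, Java heap and native memory alike.
ps -o rss= -p "$PID"

# On Linux, pmap breaks that total down by mapping; large anonymous segments
# beyond the Java heap are native allocations that -Xmx never bounds.
# pmap -x "$PID"
```

Modern HotSpot JVMs can also break native usage down from the inside: start with `-XX:NativeMemoryTracking=summary` and then run `jcmd <pid> VM.native_memory summary`.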

From the following article, Thanks for the Memory (Understanding How the JVM Uses Native Memory on Windows and Linux):

Maintaining the heap and running the garbage collector use native memory you can't control.

More native memory is required to maintain the state of the memory-management system maintaining the Java heap. Data structures must be allocated to track free storage and record progress when collecting garbage. The exact size and nature of these data structures varies with implementation, but many are proportional to the size of the heap.

and the JIT compiler uses native memory just like javac would

Bytecode compilation uses native memory (in the same way that a static compiler such as gcc requires memory to run), but both the input (the bytecode) and the output (the executable code) from the JIT must also be stored in native memory. Java applications that contain many JIT-compiled methods use more native memory than smaller applications.

and then you have the classloader(s) which use native memory

Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String) and may use third-party libraries. These classes need to be stored in memory for as long as they are being used. How classes are stored varies by implementation.
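The class storage and JIT output quoted above show up as the JVM's non-heap memory pools, which can be listed from inside the process with the standard java.lang.management API. A minimal sketch (pool names such as "Metaspace" or "CodeHeap" vary by JVM implementation and version):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class NonHeapPools {
    public static void main(String[] args) {
        // Non-heap pools (e.g. Metaspace, Compressed Class Space, Code Cache)
        // hold exactly the class storage and JIT-compiled code described
        // above -- none of it is bounded by -Xmx.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.printf("%-28s used = %,d bytes%n",
                        pool.getName(), pool.getUsage().getUsed());
            }
        }
    }
}
```

Even this in-process view is partial: thread stacks, GC bookkeeping, and direct buffers are native allocations that no MemoryPoolMXBean reports.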

I won't even start quoting the section on Threads; I think you get the idea. -Xmx doesn't control what you think it controls: it controls the JVM heap, not everything goes in the JVM heap, and the heap takes up even more native memory than what you specify, for management and bookkeeping.

Plain and simple: the JVM uses more memory than what is supplied in -Xms, -Xmx, and the other command line parameters.

Here is a very detailed article on how the JVM allocates and manages memory; it isn't as simple as you expected based on the assumptions in your question, and it is well worth a comprehensive read.

ThreadStack size in many implementations has minimum limits that vary by operating system and sometimes JVM version; the thread stack setting is ignored if you set it below the native minimum for the JVM or the OS (sometimes ulimit on *nix has to be set instead). Other command line options work the same way, silently defaulting to higher values when values that are too small are supplied. Don't assume that all the values passed in represent what is actually used.
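The interplay between the OS limit and the JVM setting can be checked directly. A sketch; note that the exact minimum -Xss, and whether a too-small value is silently raised or rejected outright, varies by JVM version and platform:

```shell
# OS-level per-thread stack limit (in kilobytes, or "unlimited");
# adjustable with e.g. `ulimit -s 16384` before launching the JVM.
ulimit -s

# Request a specific JVM thread stack size. On modern HotSpot a value below
# the platform minimum is typically rejected with an error at startup,
# rather than silently raised as some older JVMs did.
# java -Xss512k -version
```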

The classloaders (and Tomcat has more than one) eat up lots of memory that isn't easily accounted for. The JIT eats up a lot of memory, trading space for time, which is a good trade-off most of the time.
