Tool for analyzing large Java heap dumps


Question


I have a HotSpot JVM heap dump that I would like to analyze. The VM ran with -Xmx31g, and the heap dump file is 48 GB in size.
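
For context, a dump of this kind is usually captured with jmap, or written by the JVM itself on OutOfMemoryError; the <pid> and paths below are placeholders:

# Capture a heap dump from a running JVM (replace <pid> and the path):
jmap -dump:format=b,file=/data/dumps/jvm.hprof <pid>

# Or let the JVM write one automatically when the heap is exhausted:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/dumps ...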


  • I won't even try jhat, as it requires about five times the heap memory (that would be 240 GB in my case) and is awfully slow.
  • Eclipse MAT crashes with an ArrayIndexOutOfBoundsException after analyzing the heap dump for several hours.


What other tools are available for that task? A suite of command line tools would be best, consisting of one program that transforms the heap dump into efficient data structures for analysis, combined with several other tools that work on the pre-structured data.

Answer


Normally, what I use is ParseHeapDump.sh, included within Eclipse Memory Analyzer and described here, and I run it on one of our more beefed-up servers (download and copy over the Linux .zip distro, unzip it there). The shell script needs fewer resources than parsing the heap from the GUI, and you can run it on a beefier server with more resources (you can allocate more by appending something like -vmargs -Xmx40g -XX:-UseGCOverheadLimit to the end of the script's last line). For instance, the last line of that file might look like this after modification:

./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xmx40g -XX:-UseGCOverheadLimit

Run it on the dump like so:

./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof


After that succeeds, it creates a number of "index" files next to the .hprof file.
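
The exact set of files depends on the MAT version, but the directory next to the dump will typically end up looking something like this (file names are illustrative):

ls ../today_heap_dump/
jvm.hprof   jvm.index   jvm.a2s.index   jvm.domIn.index   jvm.domOut.index
jvm.inbound.index   jvm.outbound.index   jvm.o2c.index   jvm.o2ret.index   jvm.threads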


After creating the indices, I try to generate reports from them and scp those reports to my local machine, then try to see if I can find the culprit just by that (just the reports, not the indices). Here's a tutorial on creating the reports.

Example report:

./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects

Other report options:

org.eclipse.mat.api:overview
org.eclipse.mat.api:top_components
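
Each report run writes a zipped HTML report next to the dump, which is what makes the scp-and-browse workflow convenient; a sketch (output file names are typical for MAT, but may differ between versions):

./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:overview
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:top_components

# Typical outputs next to the dump (names may vary by MAT version):
#   jvm_Leak_Suspects.zip  jvm_System_Overview.zip  jvm_Top_Components.zip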


If those reports are not enough and I need to dig some more (say, via OQL), I scp the indices as well as the hprof file to my local machine, and then open the heap dump (with the indices in the same directory as the heap dump) in my Eclipse MAT GUI. From there, it does not need too much memory to run.
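
To give an idea of the kind of digging that works fine locally once the indices exist, here is a minimal OQL sketch for the MAT GUI (the class and threshold are chosen purely for illustration):

SELECT toString(s), s.@retainedHeapSize
FROM java.lang.String s
WHERE s.@retainedHeapSize > 1024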

Edit:

I'd just like to add two notes:


• As far as I know, only the generation of the indices is the memory-intensive part of Eclipse MAT. Once you have the indices, most of the processing in Eclipse MAT does not need that much memory.

• Doing this via a shell script means I can run it on a headless server (and I normally do, since headless servers are usually the most powerful ones). And if you have a server that can produce a heap dump of this size, chances are you have another server that can process that much of a heap dump, too.

