JVM issues with a large in-memory object


Problem description

I have a binary that contains a list of short strings which is loaded on startup and stored in memory as a map from string to protobuf (that contains the string..). (Not ideal, but hard to change that design due to legacy issues) Recently that list has grown from ~2M to ~20M entries causing it to fail when constructing the map.

At first I got OutOfMemoryError: Java heap space.

When I increased the heap size using -Xms and -Xmx, we ran into GC overhead limit exceeded.

Runs on a Linux 64-bit machine with 15GB available memory and the following JVM args (I increased the RAM 10G->15G and the heap flags 6000M -> 9000M):

-Xms9000M -Xmx9000M -XX:PermSize=512m -XX:MaxPermSize=2018m

This binary does a whole lot of things and is serving live traffic so I can't afford it being occasionally stuck.

I eventually went and did the obvious thing, which is fixing the code (change from HashMap to ImmutableSet) and adding more RAM (-Xmx11000M).
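As a rough illustration of that fix (a sketch, not the original code — the class and method names are made up, and JDK 10+ Set.copyOf stands in for Guava's ImmutableSet): when the protobuf value only wraps the key string, replacing the HashMap with an immutable set of the strings drops the per-entry map-node overhead and the duplicated references.

```java
import java.util.List;
import java.util.Set;

public class KeySet {
    // Build an immutable membership set once at startup. Set.copyOf
    // (JDK 10+) deduplicates and produces a compact unmodifiable set,
    // comparable in intent to Guava's ImmutableSet from the question.
    public static Set<String> build(List<String> shortStrings) {
        return Set.copyOf(shortStrings);
    }

    public static void main(String[] args) {
        Set<String> keys = build(List.of("alpha", "beta", "gamma"));
        System.out.println(keys.contains("beta")); // membership lookup
    }
}
```

Because the set is built once and never mutated, there is also no resizing churn during the load phase the way there is with a growing HashMap.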

I'm looking for a temporary solution if that's possible until we have a more scalable one.

Solution

First, you need to figure out if the "OOME: GC overhead limit exceeded" is due to the heap being:

  • too small ... causing the JVM to do repeated Full GCs, or

  • too large ... causing the JVM to thrash the virtual memory when a Full GC is run.

You should be able to distinguish these two cases by turning on and examining the GC logs, and using OS-level monitoring tools to check for excessive paging loads. (When checking the paging levels, also check that the problem isn't due to competition for RAM between your JVM and another memory-hungry application.)
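For example, on the pre-Java-9 HotSpot JVMs that still accept -XX:MaxPermSize (as this one does), GC logging can be turned on with flags along these lines (replaced by -Xlog:gc* in Java 9+):

```
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
```

Paging activity can then be watched alongside the log with an OS tool such as vmstat or sar.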

If the heap is too small, try making it bigger. If it is too big, make it smaller. If your system is showing both symptoms ... then you have a big problem.

You should also check that "compressed oops" is enabled for your JVM, as that will reduce your JVM's memory footprint. The -XshowSettings option lists the settings in effect when the JVM starts. Use -XX:+UseCompressedOops to enable compressed oops if they are disabled.

(You will probably find that compressed oops are enabled by default, but it is worth checking. This would be an easy fix ... )
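Two ways to do that check on a HotSpot JVM (the grep pattern assumes HotSpot's flag-dump format):

```
java -XshowSettings:vm -version
java -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```

The first prints the VM settings summary at startup; the second shows the effective value of the UseCompressedOops flag.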

If none of the above work, then your only quick fix is to get more RAM.

But obviously, the real solution is to reengineer the code so that you don't need a huge (and increasing over time) in-memory data structure.
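One such re-engineering direction (my sketch, not part of the answer — the class name is hypothetical): if the map is really just a membership test over short strings, a single sorted array with binary search costs one array slot per entry, versus a node object, hash field, and bucket reference per entry in a HashMap.

```java
import java.util.Arrays;

// Sketch: membership test over a large, fixed set of short strings,
// built once at startup and queried read-only while serving traffic.
public class SortedKeyIndex {
    private final String[] keys; // sorted copy of the input

    public SortedKeyIndex(String[] unsorted) {
        String[] copy = unsorted.clone();
        Arrays.sort(copy);                 // O(n log n) once at startup
        this.keys = copy;
    }

    public boolean contains(String s) {
        return Arrays.binarySearch(keys, s) >= 0; // O(log n) per lookup
    }
}
```

Lookups are slower than a hash map's, but for 20M entries the per-entry memory saving may matter more than the extra comparisons, and the structure is trivially safe to share across threads once built.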
