How do I get G1 to print more log details?


Question


I'm testing a Jetty-based API against a Netty-based one. With the only difference in the experiment being which API I use (same application, same servers, same memory config, same load, etc.), I get longer GC pauses with the Netty-based one. Mostly, pauses are below a millisecond, but after a few days of running smoothly, every 12-24 hours I'll see a 4-6 second pause that does not show up with the Jetty-based API.

Whenever this happens, there is extremely little information about what G1 was doing that caused it to issue an STW pause. Note the second pause message here:

2016-02-23T05:22:27.709+0000: 66360.282: Total time for which application threads were stopped: 0.0319639 seconds, Stopping threads took: 0.0000716 seconds
2016-02-23T05:22:35.642+0000: 66368.215: Total time for which application threads were stopped: 6.9705594 seconds, Stopping threads took: 0.0000737 seconds
2016-02-23T05:22:35.673+0000: 66368.246: Total time for which application threads were stopped: 0.0048374 seconds, Stopping threads took: 0.0040574 seconds 

My GC options are:

-XX:+UseG1GC 
-XX:+G1SummarizeConcMark 
-XX:+G1SummarizeRSetStats 
-XX:+PrintAdaptiveSizePolicy 
-XX:+PrintGC 
-XX:+PrintGCApplicationStoppedTime 
-XX:+PrintGCDateStamps 
-XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps 
-XX:+DisableExplicitGC 
-XX:InitialHeapSize=12884901888 
-XX:MaxHeapSize=12884901888 

And, for reference, my VM options are:

-XX:+AlwaysPreTouch 
-XX:+DebugNonSafepoints 
-XX:+FlightRecorder 
-XX:FlightRecorderOptions=stackdepth=500 
-XX:-OmitStackTraceInFastThrow 
-XX:+TrustFinalNonStaticFields 
-XX:+UnlockCommercialFeatures 
-XX:+UnlockDiagnosticVMOptions 
-XX:+UnlockExperimentalVMOptions 
-XX:+UseCompressedClassPointers 
-XX:+UseCompressedOops 

How do I find out why G1 stopped the world at 2016-02-23T05:22:35.642?
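(For reference, one standard way to get more detail on a pause like this is to log safepoint statistics alongside the GC output, so that each stop is attributed to a specific VM operation, or to none, which points away from G1 itself. A minimal sketch, assuming the JDK 8-era HotSpot that produced the logs above:

-XX:+PrintSafepointStatistics 
-XX:PrintSafepointStatisticsCount=1

For every safepoint this prints the triggering VM operation, e.g. G1IncCollectionPause, plus a breakdown of time spent bringing threads to the safepoint versus time spent inside it. On JDK 9+ the unified-logging equivalent is -Xlog:safepoint.)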

Solution

To add some closure to this: the problem was not, technically, a GC pause at all; it was a combination of several factors:

  • AWS throttles IO to what you've paid for
  • /tmp on Ubuntu by default ended up on our (throttled) EBS volume
  • the JVM by default writes to /tmp during stop-the-world(!)

Other parts of our application reached the EBS throttling threshold, and when the JVM tried to write to /tmp during an STW pause, every thread in the JVM queued up behind the AWS throttling point.
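The file in question is HotSpot's jvmstat performance-counter file, which each JVM keeps memory-mapped under /tmp on Linux; it is what jps and jstat read. You can inspect the mechanism directly (a sketch; the path and pid are illustrative):

# one ~32 KB memory-mapped counter file per running JVM
ls -l /tmp/hsperfdata_$USER/
# jps and jstat discover and sample JVMs through these files
jstat -gcutil <pid> 1s

When the kernel writes those mapped pages back to a throttled, EBS-backed /tmp, the counter update the JVM performs inside the safepoint blocks until the write-back completes, which is exactly the stall described above.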

It seems the Netty/Jetty difference was a red herring.

We need our application to survive in this kind of environment, so our solution was to disable this JVM behavior, at the cost of losing support for several JVM tools:

-XX:+PerfDisableSharedMem
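Note the trade-off: with -XX:+PerfDisableSharedMem the counters live only in anonymous memory, so jps no longer lists the process and jstat can no longer sample it; that is the tooling support lost above. A sometimes-suggested alternative, if you need those tools, is to keep the counters but back /tmp with RAM so write-back never hits the throttled volume; a sketch of an /etc/fstab entry (size and mount options are assumptions to adapt):

tmpfs  /tmp  tmpfs  defaults,noatime,size=512m  0  0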

More info on this issue is in this excellent blog post: http://www.evanjones.ca/jvm-mmap-pause.html
