Java blocking issue: Why would the JVM block threads in many different classes/methods?

Problem Description

Update: This looks like a memory issue. A 3.8 GB hprof file indicated that the JVM was dumping its heap when this "blocking" occurred. Our operations team saw that the site wasn't responding, took a stack trace, then shut down the instance. I believe they shut down the site before the heap dump finished. The log had no errors, exceptions, or other evidence of problems, probably because the JVM was killed before it could generate an error message.

Original Question
We had a recent situation where the application appeared, to the end user, to hang. We got a stack trace before the application restart, and I found some surprising results: of 527 threads, 463 had thread state BLOCKED.

In the Past
In the past, blocked threads usually had this issue:
1) Some obvious bottleneck: e.g. a database record lock or file system lock problem which caused other threads to wait.
2) All blocked threads would block on the same class/method (e.g. the JDBC or file system classes).

Unusual Data
In this case, I see all sorts of classes/methods blocked, including JVM internal classes, JBoss classes, log4j, etc., in addition to application classes (including JDBC and Lucene calls).

The Question
What would cause a JVM to block threads in log4j.Hierarchy.getLogger and java.lang.reflect.Constructor.newInstance? Obviously some resource "is scarce", but which resource?

thanks

will

Stack Trace Excerpts

http-0.0.0.0-80-417" daemon prio=6 tid=0x000000000f6f1800 nid=0x1a00 waiting for monitor entry [0x000000002dd5d000]
   java.lang.Thread.State: BLOCKED (on object monitor)
                at sun.reflect.GeneratedConstructorAccessor68.newInstance(Unknown Source)
                at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
                at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
                at java.lang.Class.newInstance0(Class.java:355)
                at java.lang.Class.newInstance(Class.java:308)
                at org.jboss.ejb.Container.createBeanClassInstance(Container.java:630)

http-0.0.0.0-80-451" daemon prio=6 tid=0x000000000f184800 nid=0x14d4 waiting for monitor entry [0x000000003843d000]
   java.lang.Thread.State: BLOCKED (on object monitor)
                at java.lang.Class.getDeclaredMethods0(Native Method)
                at java.lang.Class.privateGetDeclaredMethods(Class.java:2427)
                at java.lang.Class.getMethod0(Class.java:2670)

"http-0.0.0.0-80-449" daemon prio=6 tid=0x000000000f17d000 nid=0x2240 waiting for monitor entry [0x000000002fa5f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
                at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.register(Http11Protocol.java:638)
                - waiting to lock <0x00000007067515e8> (a org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler)
                at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.createProcessor(Http11Protocol.java:630)


"http-0.0.0.0-80-439" daemon prio=6 tid=0x000000000f701800 nid=0x1ed8 waiting for monitor entry [0x000000002f35b000]
   java.lang.Thread.State: BLOCKED (on object monitor)
                at org.apache.log4j.Hierarchy.getLogger(Hierarchy.java:261)
                at org.apache.log4j.Hierarchy.getLogger(Hierarchy.java:242)
                at org.apache.log4j.LogManager.getLogger(LogManager.java:198)

Solution

These are listed roughly in the order I would try them, depending on the evidence collected:

  • Have you looked at GC behavior? Are you under memory pressure? That could result in newInstance() and a few others above being blocked. Run your VM with -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc and log the output. Are you seeing excessive GC times near the time of failure/lockup? (A small in-process GC-overhead check is sketched after this list.)
    • Is the condition repeatable? If so, try with varying heap sizes in the JVM (-Xmx) and see if the behavior changes substantially. If so, look for memory leaks or properly size the heap for your app.
    • If the previous is tough, and you're not getting an OutOfMemoryError when you should, you can tune the GC tunables... see the JDK6.0 XX options, or the JDK6.0 GC Tuning Whitepaper. Look specifically at -XX:+UseGCOverheadLimit and -XX:GCTimeLimit and related options. (Note these are not well documented, but may be useful...)
  • Might there be a deadlock? With only stack trace excerpts, it can't be determined here. Look for cycles amongst the monitor states that threads are blocked on (vs. what they hold). I believe jconsole can do this for you... (yep, under the Threads tab, "detect deadlocks"). A programmatic version is also sketched after this list.
  • Try doing several repeated stacktraces and look for what changes vs. what stays the same...
  • Do the forensics... for each stack entry that says "BLOCKED", go look up the specific line of code and figure out whether there is a monitor there or not. If there's an actual monitor acquisition, it should be fairly easy to identify the limiting resource. However, some of your threads may show blocked without a transparently available monitor; these will be trickier... (The last sketch after this list groups BLOCKED threads by the monitor they are waiting for.)
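
As a rough illustration of the GC point in the first bullet (this is only a sketch, not part of the original answer): in addition to the GC log flags, you can watch cumulative GC time from inside the suspect JVM through the GarbageCollectorMXBean API. The class name and the 10-second interval are arbitrary, and the runnable is assumed to be started on a background thread inside the application:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Sketch: periodically report how much time this JVM spent in GC during the
    // previous interval. If the reported time approaches the interval length,
    // the process is spending most of its time collecting (memory pressure).
    public class GcOverheadMonitor implements Runnable {
        public void run() {
            long previousTotal = 0;
            while (!Thread.currentThread().isInterrupted()) {
                long total = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    total += gc.getCollectionTime(); // cumulative milliseconds for this collector
                }
                System.out.println("GC time in last 10s: " + (total - previousTotal) + " ms");
                previousTotal = total;
                try {
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // stop when the app shuts the thread down
                }
            }
        }
    }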
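
For the deadlock check, jconsole and jstack already report deadlocks, but if you want the check automated, a minimal sketch using the standard ThreadMXBean API might look like the following (the class name is mine, and it has to run inside the JVM being diagnosed, for example from a servlet or a scheduled task):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Sketch: report threads deadlocked on object monitors or ownable synchronizers.
    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long[] ids = threads.findDeadlockedThreads(); // null when no deadlock exists
            if (ids == null) {
                System.out.println("No deadlock detected");
                return;
            }
            for (ThreadInfo info : threads.getThreadInfo(ids, Integer.MAX_VALUE)) {
                System.out.println(info.getThreadName()
                        + " is waiting on " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }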
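
For the forensics step, rather than eyeballing 463 BLOCKED stack frames by hand, a sketch along these lines (again my own, assumed to run inside the affected JVM) groups the blocked threads by the monitor they are waiting for and the thread that owns it, which usually points straight at the limiting resource:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch: summarize which monitors the BLOCKED threads are queued on and who holds them.
    public class BlockedThreadSummary {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // Dump every live thread, including locked monitors and synchronizers.
            ThreadInfo[] infos = mx.dumpAllThreads(true, true);
            Map<String, Integer> blockedBy = new HashMap<String, Integer>();
            for (ThreadInfo info : infos) {
                if (info.getThreadState() == Thread.State.BLOCKED) {
                    String key = info.getLockName() + " owned by " + info.getLockOwnerName();
                    Integer count = blockedBy.get(key);
                    blockedBy.put(key, count == null ? 1 : count + 1);
                }
            }
            for (Map.Entry<String, Integer> entry : blockedBy.entrySet()) {
                System.out.println(entry.getValue() + " thread(s) blocked on " + entry.getKey());
            }
        }
    }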
