How to solve high CPU load in a Java application


Problem Description


Today, I found that the CPU load of my server is too high, and the server is just running a Java application.

Here are my operation steps.

  1. I used the top command to find the application's pid. The pid is 25713.

  2. I used the top -H -p 25713 command to find the pids (thread ids) that used most of the CPU, such as: 25719 tomcat 20 0 10.6g 1.5g 13m R 97.8 4.7 314:35.22 java.

  3. I used the jstack -F 25713 command to print the dump info, such as: "Gang worker#4 (Parallel GC Threads)" os_prio=0 tid=0x00007f5f10021800 nid=0x6477 runnable

  4. I searched for the pids in the dump file. Then I found that the pids which used most of the CPU all correspond to entries like "Gang worker#4 (Parallel GC Threads)" os_prio=0 tid=0x00007f5f10021800 nid=0x6477 runnable (see the hex-conversion sketch after this list).

  5. After I used the jstack command, the CPU became normal!
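
As an aside (not part of the original steps): the nid value that jstack prints is just the native thread id in hex, so the hot thread ids from top -H can be matched against the dump mechanically. A minimal Java sketch of that conversion, where the class name and the hard-coded id are only illustrative:

    // Illustration only: converts the decimal thread id shown by `top -H`
    // into the hex `nid` that jstack prints.
    public class NidLookup {
        public static void main(String[] args) {
            int lwp = 25719;  // hottest thread id from `top -H -p 25713` in step 2
            String nid = "0x" + Integer.toHexString(lwp);
            System.out.println(nid);  // prints 0x6477, matching "nid=0x6477" in the dump
        }
    }

Here 25719 in hex is indeed 0x6477, which is why the hottest threads from step 2 line up with the "Gang worker" entries in step 4.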

Here are my questions:

  1. Why did the GC threads make the CPU load so high?
  2. Why did the CPU become normal after I used the jstack command?

This has happened more than once; it happens every time.

Here are some normal logs:

2015-10-10T10:17:52.019+0800: 71128.973: [GC (Allocation Failure) 2015-10-10T10:17:52.019+0800: 71128.973: [ParNew: 309991K->206K(348416K), 0.0051145 secs] 616178K->306393K(1009920K), 0.0052406 secs] [Times: user=0.09 sys=0.00, real=0.01 secs]

When the CPU gets too high, the GC log stops at [GC (Allocation Failure) 2015-10-10T10:18:10.564+0800: 71147.518: [ParNew:, and there are no further log lines.

When I execute the jstack command, the following log is printed:

2015-10-10T10:17:50.757+0800: 53501.137: [GC (Allocation Failure) 2015-10-10T10:17:50.757+0800: 53501.137: [ParNew: 210022K->245K(235968K), 369.6907808 secs] 400188K->190410K(1022400K), 369.6909604 secs] [Times: user=3475.15 sys=11.69, real=369.63 secs]
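
A rough reading of that last entry (an observation from the numbers quoted above, not something stated in the post): the collection took 369.63 s of wall-clock time but burned 3475.15 s of CPU time, and 3475.15 / 369.63 ≈ 9.4, so roughly nine to ten threads were running flat out for the whole pause. That is consistent with several parallel GC worker threads spinning for the entire stall rather than one thread doing useful work.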

Solution

Just guessing, you might be affected by the futex_wait bug present in certain kernel versions.

More generally, jstack -F sends a signal to the process, which will interrupt any thread that may be sleeping. So maybe the GC threads are just spin-waiting for another thread that somehow missed a wakeup. I.e. if it is indeed stuck in a GC and sending a signal fixes the problem, then this may point to a locking or memory-ordering bug, if not in the kernel then in the JVM.
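
For readers unfamiliar with the term, here is a minimal Java sketch of the generic missed-wakeup pattern the answer is alluding to. It is only an illustration of the idea (a hypothetical class, not the JVM's internal GC synchronization code):

    class MissedWakeup {
        private final Object lock = new Object();
        private boolean ready = false;

        void producer() {
            synchronized (lock) {
                ready = true;
                lock.notify();      // if no thread is waiting yet, this notification is simply lost
            }
        }

        void brokenConsumer() throws InterruptedException {
            synchronized (lock) {
                lock.wait();        // BROKEN: if notify() already ran, this blocks forever
            }
        }

        void correctConsumer() throws InterruptedException {
            synchronized (lock) {
                while (!ready) {    // correct: re-check the condition so an earlier notify
                    lock.wait();    // (or a spurious wakeup) is handled
                }
            }
        }
    }

If one thread blocks forever in the broken variant, any peer that spin-waits for it will burn CPU indefinitely, which is the kind of behaviour the answer is speculating about for the GC worker threads.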

Instead of using jstack -F you could try sending SIGBREAK to the process and see if that has the same effect.
