Linux core dumps are too large!


Problem Description

Recently I've been noticing an increase in the size of the core dumps generated by my application. Initially, they were just around 5 MB in size and contained around 5 stack frames; now I have core dumps of over 2 GB, and the information contained within them is no different from the smaller dumps.

Is there any way I can control the size of core dumps generated? Shouldn't they be at least smaller than the application binary itself?

Binaries are compiled in this way:

  • Compiled in release mode with debug symbols (i.e., the -g compiler option in GCC).
  • Debug symbols are copied into a separate file and stripped from the binary.
  • A GNU debug link is added to the binary (see the command sketch after this list).
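
For reference, a minimal sketch of that split-debug workflow using GNU binutils; the myapp name is a placeholder, and the exact commands are an assumption about how the steps above were performed:

    # Build in release mode with debug info
    gcc -O2 -g -o myapp main.c

    # Copy the debug symbols into a separate file
    objcopy --only-keep-debug myapp myapp.debug

    # Strip the debug symbols from the shipped binary
    strip --strip-debug --strip-unneeded myapp

    # Record a GNU debug link so GDB can locate myapp.debug
    objcopy --add-gnu-debuglink=myapp.debug myapp

Note that this only changes the on-disk binary; it has no effect on core dump size, which reflects the process's runtime memory.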

At the beginning of the application, there's a call to setrlimit which sets the core limit to infinity -- is this the problem?
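
The question doesn't show the call itself; a minimal sketch of what such a setrlimit call typically looks like (the surrounding code is illustrative, not from the question):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Lift the cap on core dump size entirely: on a crash, the
           kernel may write the process's full address-space image. */
        struct rlimit rl = { .rlim_cur = RLIM_INFINITY, .rlim_max = RLIM_INFINITY };
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");

        /* ... rest of the application ... */
        return 0;
    }

For an unprivileged process this call fails with EPERM if the existing hard limit is below RLIM_INFINITY.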

Recommended Answer

Yes - don't allocate so much memory :-)

The core dump contains the full image of your application's address space, including code, stack and heap (malloc'd objects etc.)

If your core dumps are >2GB, that implies that at some point you allocated that much memory.

You can use setrlimit to set a lower limit on core dump size, at the risk of ending up with a core dump that you can't decode (because it's incomplete).
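
A minimal sketch of capping the limit this way; the 64 MB figure is an arbitrary example, not a value from the answer:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Cap core dumps at 64 MB. The kernel truncates anything
           beyond the limit, so the resulting dump may be incomplete
           and undecodable, as noted above. */
        const rlim_t cap = 64UL * 1024 * 1024;
        struct rlimit rl = { .rlim_cur = cap, .rlim_max = cap };
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");
        return 0;
    }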
