Python: memory usage statistics per object-types (or source code line)

Problem description

I am doing some heavy calculations with Python (using OpenCV and Numpy) and in the end I end up with a lot of memory usage (>1GB), even though all references should be gone and I only hold the end result (which should not be more than a few MB).

To debug this, it would be nice if I could somehow get some stats that show me how many object instances there are of each type, ordered by the total amount of memory they take (per object class).

Or even nicer: not per object class but per source code line where the object was created (though I guess this info is not available unless I activate some debugging in Python, which would make the calculation too slow, so I am not sure whether that would be helpful).

Can I get some stats like this somehow? Or how would I debug this?


Some have misunderstood me: I only need to know how to debug the memory usage. Processing/run time is fine as it is.

Solution

I think you're searching for a Python profiler;

you have a bunch of them that you can use, like Heapy, profile or cProfile, Pysize...

Example using Heapy:

You have to include this snippet somewhere in your code:

from guppy import hpy   # provided by the guppy package (guppy3 on Python 3)
h = hpy()
print h.heap()           # per-type summary of all live objects

and it will give you output like:

Partition of a set of 132527 objects. Total size = 8301532 bytes.
Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
0  35144  27  2140412  26   2140412  26 str
1  38397  29  1309020  16   3449432  42 tuple
2    530   0   739856   9   4189288  50 dict (no owner)
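
If you mainly care about what the heavy computation itself allocates, one common Heapy pattern is to set a relative baseline before the work and dump the heap afterwards. The following is a minimal sketch of that idea; heavy_computation() is a hypothetical placeholder for your OpenCV/Numpy code, and it assumes Heapy's setrelheap() baseline and byrcs classifier behave as documented:

from guppy import hpy

h = hpy()
h.setrelheap()                 # measure only objects allocated after this point

result = heavy_computation()   # hypothetical placeholder for your own code

heap = h.heap()                # objects created since setrelheap()
print heap                     # per-type breakdown, sorted by total size
print heap.byrcs               # same objects, grouped by referrer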

Example with cProfile:

You can run it like this:

python -m cProfile script.py

Output:

         5 function calls in 0.000 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 myscript.py:1(<module>)
        1    0.000    0.000    0.000    0.000 {execfile}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {range}
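
Note that cProfile measures run time rather than memory, but if you prefer to profile from inside the script instead of via the command line, a minimal sketch could look like the one below (heavy_computation() is again a hypothetical placeholder):

import cProfile
import pstats

# Profile one call and write the raw statistics to a file.
cProfile.run('heavy_computation()', 'profile.out')   # hypothetical entry point

# Print the 10 entries with the largest cumulative time.
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(10)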

You can also use the gc module to find out why Python is not freeing your memory, and to ask it to free memory with gc.collect().
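
As a rough, dependency-free alternative to Heapy, the standard gc module can at least enumerate the container objects it tracks. The sketch below counts live instances per type; note that it only sees objects tracked by the garbage collector (so plain ints and many strings are missing) and reports counts, not bytes:

import gc
from collections import Counter

gc.collect()   # reclaim objects that are only kept alive by reference cycles

# Count the tracked live objects, grouped by type name.
counts = Counter(type(o).__name__ for o in gc.get_objects())
for name, n in counts.most_common(10):
    print("%-20s %d" % (name, n))   # works on Python 2 and 3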

By the way, have you looked at numpy? I think it is more suitable if you're doing heavy calculations like you said.
