cProfile taking a lot of memory
Question
I am attempting to profile my project in Python, but I am running out of memory.
My project itself is fairly memory-intensive, but even half-size runs are dying with a MemoryError when run under cProfile.
Doing smaller runs is not a good option, because we suspect that the run time is scaling super-linearly, and we are trying to discover which functions dominate during large runs.
Why is cProfile taking so much memory? Can I make it take less? Is this normal?
Answer
Updated: since cProfile is built into current versions of Python (the _lsprof extension), it should be using the main allocator. If this doesn't work for you, Python 2.7.1 has a --with-valgrind compiler option which causes it to switch to using malloc() at runtime. This is nice because it avoids having to use a suppressions file. You can build a version just for profiling, and then run your Python app under valgrind to look at all allocations made by the profiler, as well as by any C extensions which use custom allocation schemes.
(Rest of original answer follows.)
Maybe try to see where the allocations are going. If you have a place in your code where you can periodically dump out the memory usage, you can use guppy to view the allocations:
import lxml.html
from guppy import hpy

hp = hpy()
trees = {}
for i in range(10):
    # do something
    trees[i] = lxml.html.fromstring("<html>")
    print hp.heap()

# examine allocations for specific objects you suspect
print hp.iso(*trees.values())
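If it is the profiler's own bookkeeping that blows up, one mitigation (an addition here, not part of the original answer, shown in modern Python; suspect_function is a hypothetical stand-in) is to enable cProfile only around the region you suspect, so it only records stats for functions reached from there:

```python
import cProfile
import io
import pstats

def suspect_function(n):
    # hypothetical stand-in for the code path you think dominates
    return sum(i * i for i in range(n))

pr = cProfile.Profile()
pr.enable()          # start collecting stats here...
result = suspect_function(10_000)
pr.disable()         # ...and stop here, keeping the profiler's tables smaller

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

This keeps the profiler from tracking every function in the whole program, which may be enough to fit a full-size run in memory.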