Is there a way to reduce scipy/numpy precision to reduce memory consumption?


Problem description

On my 64-bit Debian/Lenny system (4GByte RAM + 4GByte swap partition) I can successfully do:

import numpy as np
from scipy.fftpack import fftn   # fftn is assumed to come from scipy.fftpack here
v = np.array(10000 * np.random.random([512, 512, 512]), dtype=np.int16)
f = fftn(v)

but with f being np.complex128 the memory consumption is shocking, and I can't do much more with the result (e.g. modulate the coefficients and then f=ifftn(f)) without a MemoryError traceback.

Rather than installing some more RAM and/or expanding my swap partitions, is there some way of controlling the scipy/numpy "default precision" and having it compute a complex64 array instead?

I know I can just reduce it afterwards with f=array(f,dtype=np.complex64); I'm looking to have it actually do the FFT work in 32-bit precision and half the memory.

Recommended answer

It doesn't look like there's any function to do this in scipy's fft functions (see http://www.astro.rug.nl/efidad/scipy.fftpack.basic.html ).
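
Newer SciPy releases do, however, include a scipy.fft module (distinct from scipy.fftpack) that preserves single precision, so casting the input to float32 up front should yield a complex64 result directly. A minimal sketch, assuming SciPy 1.4 or later is available:

import numpy as np
from scipy import fft as sfft   # the newer pocketfft-based module; assumes SciPy >= 1.4

v32 = (10000 * np.random.random([512, 512, 512])).astype(np.float32)
f = sfft.fftn(v32)       # single-precision input is not upcast
print(f.dtype)           # complex64 -- half the memory of a complex128 result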

Unless you're able to find a fixed-point FFT library for Python, it's unlikely that the function you want exists, since the native hardware floating-point format is 64 bits (so each complex value takes 128 bits). It does look like you could use the rfft method, which for real-valued input only stores the non-redundant half of the coefficients, and that would save half your RAM.
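
For real input the transform is Hermitian-symmetric, so only about half the coefficients need to be stored. A small sketch of that idea using numpy's n-dimensional rfftn (the session below uses fft.rfft, which transforms just the last axis):

import numpy as np

v = np.zeros((512, 512, 512), dtype=np.int16)   # stand-in for the real-valued input
f = np.fft.rfftn(v)
print(f.shape, f.dtype)   # (512, 512, 257) complex128 -- the last axis keeps n//2 + 1 terms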

I ran the following in interactive python:

>>> from numpy import *
>>> v = array(10000*random.random([512,512,512]),dtype=int16)
>>> shape(v)
(512, 512, 512)
>>> type(v[0,0,0])
<type 'numpy.int16'>

At this point the RSS (Resident Set Size) of python was 265MB.

f = fft.fft(v)

And at this point the RSS of python was 2.3GB.
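
Those figures line up with the raw array sizes; a quick back-of-the-envelope check:

n = 512 ** 3                   # 134,217,728 elements
print(n * 2 / 2 ** 20)         # int16 input:       256.0 MiB  (roughly the 265MB RSS)
print(n * 16 / 2 ** 20)        # complex128 output: 2048.0 MiB (most of the 2.3GB RSS)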

>>> type(f)
<type 'numpy.ndarray'>
>>> type(f[0,0,0]) 
<type 'numpy.complex128'>
>>> v = []

And at this point the RSS goes down to 2.0GB, since I've freed v (rebinding it drops the last reference to the 256MB int16 array).

Using "fft.rfft(v)" (the transform for real-valued input) results in a 1.3GB RSS. (almost half, as expected)
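
That 1.3GB is roughly what you'd expect: fft.rfft transforms only the last axis and keeps n//2 + 1 = 257 coefficients there, with the rest of the RSS being the input array and the interpreter:

out_bytes = 512 * 512 * 257 * 16    # complex128 coefficients of the real-input transform
print(out_bytes / 2 ** 20)          # 1028.0 MiB -- about half of the full 2048 MiB result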

Doing:

>>> f = complex64(fft.fft(v))

Is the worst of both worlds, since it first computes the complex128 version (2.3GB) and then copies that into the complex64 version (1.3GB), which means the peak RSS on my machine was 3.6GB, and then it settled down to 1.3GB again.

I think that if you've got 4GB RAM, this should all work just fine (as it does for me). What's the issue?
