segmentation fault in scipy?


Problem description



I'm running operations on large arrays of floats, approx 25,000 x 80.
Python (scipy) does not seem to come close to using 4GB of wired mem,
but segfaults at around a gig. Everything works fine on smaller batches
of data around 10,000 x 80 and uses a max of ~600mb of mem. Any Ideas?
Is this just too much data for scipy?

Thanks Conor

Traceback (most recent call last):
File "C:\Temp\CR_2\run.py", line 68, in ?
net.rProp(1.2, .5, .000001, 50.0, input, output, 1)
File "/Users/conorrob/Desktop/CR_2/Network.py", line 230, in rProp
print scipy.trace(error*scipy.transpose(error))
File "D:\Python24\Lib\site-packages\numpy\core\defmatrix.py", line
149, in
__mul__
return N.dot(self, other)
MemoryError
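
For context (an editor's sketch, not part of the original post): the failing line is a matrix product whose result dwarfs both operands. With hypothetical data of the reported shape, the failure can be reproduced like this:

import numpy as np  # scipy.trace and the matrix type delegate to numpy here

# Stand-in for the `error` matrix described above.
error = np.matrix(np.zeros((25000, 80)))

# For matrix objects, * is matrix multiplication (the N.dot in the traceback):
# (25000, 80) times (80, 25000) yields a (25000, 25000) float64 result,
# roughly 4.7 GB, and that is the allocation failing inside __mul__.
product = error * error.T   # raises MemoryError on a 32-bit build
print(np.trace(product))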



Solution

co************@gmail.com wrote:

I'm running operations on large arrays of floats, approx 25,000 x 80.
Python (scipy) does not seem to come close to using 4GB of wired mem,
but segfaults at around a gig. Everything works fine on smaller batches
of data around 10,000 x 80 and uses a max of ~600mb of mem. Any Ideas?
Is this just too much data for scipy?

Thanks Conor

Traceback (most recent call last):
File "C:\Temp\CR_2\run.py", line 68, in ?
net.rProp(1.2, .5, .000001, 50.0, input, output, 1)
File "/Users/conorrob/Desktop/CR_2/Network.py", line 230, in rProp
print scipy.trace(error*scipy.transpose(error))
File "D:\Python24\Lib\site-packages\numpy\core\defmatrix.py", line
149, in
__mul__
return N.dot(self, other)
MemoryError



You should ask this question on the numpy-discussion list for better
feedback.
Does it actually segfault or give you this MemoryError?
Temporary arrays that need to be created could be the source of the
extra memory.
Generally, you should be able to use all the memory on your system
(unless you are on a 64-bit system and are not using Python 2.5).

-Travis


If I run it from the shell (unix) I get: Segmentation fault and see a
core dump in my processes. If I run it in the python shell I get as
above:
File "D:\Python24\Lib\site-packages\numpy\core\defmatrix.py", line
149, in
__mul__
return N.dot(self, other)
MemoryError

In your experience as one of the devs of scipy, is this too much data?

thank you


co************@gmail.com wrote:

I'm running operations on large arrays of floats, approx 25,000 x 80.
Python (scipy) does not seem to come close to using 4GB of wired mem,
but segfaults at around a gig. Everything works fine on smaller batches
of data around 10,000 x 80 and uses a max of ~600mb of mem. Any Ideas?
Is this just too much data for scipy?

Thanks Conor

Traceback (most recent call last):
File "C:\Temp\CR_2\run.py", line 68, in ?
net.rProp(1.2, .5, .000001, 50.0, input, output, 1)
File "/Users/conorrob/Desktop/CR_2/Network.py", line 230, in rProp
print scipy.trace(error*scipy.transpose(error))
File "D:\Python24\Lib\site-packages\numpy\core\defmatrix.py", line
149, in
__mul__
return N.dot(self, other)
MemoryError



This is not a segfault. Is this the only error you see? Or are you actually
seeing a segfault somewhere?

If error.shape == (25000, 80), then dot(error, transpose(error)) will be
returning an array of shape (25000, 25000). Assuming double precision floats,
that array will take up about 4768 megabytes of memory, more than you have. The
memory usage doesn't go up near 4 gigabytes because the allocation of the very
large returned array fails, so the large chunk of memory never gets allocated.
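
The arithmetic behind that figure, as a quick check (editor's addition):

rows = 25000
bytes_needed = rows * rows * 8    # float64 entries, 8 bytes each
print(bytes_needed / 2.0 ** 20)   # ~4768.4 MB, just under 4.7 GB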

There are two possibilities:

1. As Travis mentioned, numpy won't create the array because it is still
32-bit-limited due to the Python 2.4 C API. This has been resolved with Python 2.5.

2. The default build of numpy uses plain-old malloc(3) to allocate memory, and
it may be failing to create such large chunks of memory.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
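
An editorial aside beyond the thread itself: if the only quantity needed is scipy.trace(error * scipy.transpose(error)), the huge intermediate can be avoided altogether, because trace(dot(A, transpose(A))) equals the sum of the squared entries of A (the squared Frobenius norm). A minimal sketch of that identity, with hypothetical names and the shape from the report:

import numpy as np

error = np.zeros((25000, 80))    # stand-in with the reported shape
# trace(dot(error, error.T)) == sum over i of dot(error[i], error[i])
#                            == sum of all squared entries of error,
# so the (25000, 25000) intermediate never needs to exist.
value = (error ** 2).sum()
print(value)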

