Pickle error on code for converting numpy array into shared memory array
Problem Description
Trying to use the code from https://stackoverflow.com/a/15390953/378594 to convert a numpy array into a shared memory array and back. Running the following code:
shared_array = shmarray.ndarray_to_shm(my_numpy_array)
and then passing shared_array as an argument in the list of arguments for a multiprocessing pool:
pool.map(my_function, list_of_args_arrays)
Where list_of_args_arrays contains my shared array and other arguments.
It results in the following error:
PicklingError: Can't pickle <class 'multiprocessing.sharedctypes.c_double_Array_<array size>'>: attribute lookup multiprocessing.sharedctypes.c_double_Array_<array size> failed
Where <array_size> is the linear size of my numpy array.
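A minimal sketch that reproduces the error (assuming the shared array comes from multiprocessing.sharedctypes.RawArray, as in the linked answer): the array's ctypes type (e.g. c_double_Array_10) is created dynamically, so pickle cannot find it by name in multiprocessing.sharedctypes and refuses to serialize it.

```python
import pickle
from multiprocessing.sharedctypes import RawArray

# A shared double array of 10 elements; its class c_double_Array_10 is
# created on the fly, so pickle's attribute lookup for it fails.
shared = RawArray('d', 10)

try:
    pickle.dumps(shared)
except Exception as exc:  # pickle.PicklingError in practice
    print(type(exc).__name__, exc)
```

This is exactly what Pool.map does behind the scenes: every task argument is serialized before being sent to the workers.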
I guess something has changed in numpy ctypes or something like that?
I only need access to shared information. No editing will be done by the processes.
The function that calls the pool lies within a class. The class is instantiated and the function is called by a main.py file.
Recommended Answer
Apparently, when using multiprocessing.Pool all arguments are pickled, so there was no point in using multiprocessing.Array. Changing the code so that it uses an array of processes did the trick.
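A sketch of that fix, with hypothetical names (worker, shared, data are illustrative, not from the original code): instead of sending the shared array through Pool.map, each multiprocessing.Process receives it at spawn time, which multiprocessing supports for shared ctypes objects, and each worker re-wraps the buffer as a numpy array for read-only access.

```python
import multiprocessing as mp
import numpy as np

def worker(shared, shape):
    # Re-wrap the shared buffer as a numpy array (no copy is made).
    arr = np.frombuffer(shared, dtype=np.float64).reshape(shape)
    print(arr.sum())  # read-only use, as in the question; prints 66.0

if __name__ == '__main__':
    data = np.arange(12, dtype=np.float64)
    # Copy the numpy data into shared memory once, in the parent.
    shared = mp.RawArray('d', data)
    procs = [mp.Process(target=worker, args=(shared, data.shape))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Passing the RawArray as a Process argument works because multiprocessing handles shared ctypes objects specially while spawning a child; it is only pickling them through a pool's task queue that fails.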