How to use numpy memmap inside a Keras generator to not exceed RAM memory?
Question
I'm trying to use numpy.memmap inside a generator for training a neural network with Keras, so as not to exceed the RAM limit. I used this post as a reference, but without success. Here is my attempt:
import numpy as np

def My_Generator(path, batch_size, tempo, janela):
    samples_per_epoch = sum(1 for line in np.load(path))
    number_of_batches = samples_per_epoch / batch_size
    #data = np.memmap(path, dtype='float64', mode='r+', shape=(samples_per_epoch, 18), order='F')
    data = np.load(path)
    # create memmap arrays to store the output
    X_output = np.memmap('output', dtype='float64', shape=(samples_per_epoch, 96, 100, 17), mode='r+', order='F')
    y_output = np.memmap('output', dtype='float64', shape=(samples_per_epoch, 1), mode='r+', order='F')
    holder = np.zeros([batch_size, 18], dtype='float64')
    counter = 0
    while 1:
        holder[:] = data[counter:batch_size + counter]
        X, y = input_3D(holder, tempo, janela)
        lenth_X = len(X)
        lenth_y = len(y)
        print(lenth_X, lenth_y)
        y = y.reshape(-1, 1)
        X_output[0:lenth_X, :] = X
        y_output[0:lenth_y, :] = y
        counter += 1
        yield X_output[0:lenth_X, :].reshape(-1, 96, 10, 10, 17), y_output[0:lenth_y, :]
        # restart counter to yield data in the next epoch as well
        if counter >= number_of_batches:
            counter = 0
Nonetheless, it is still holding the chunks in RAM, so that after some epochs it exceeds the memory limit.
Thank you
Recommended answer
By following the method here:
https://stackoverflow.com/a/61472122/2962979
you may be able to address your issue by reconstructing the memmap object on each iteration, rather than keeping one long-lived memmap (and its cached pages) alive for the whole training run.
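A minimal sketch of that idea (the function name, shapes, and parameters here are illustrative, not the linked answer's exact code): re-open the memmap inside the loop, copy out only the current batch, and drop the reference before yielding, so the operating system is free to evict the cached pages between batches.

```python
import numpy as np

def memmap_generator(path, batch_size, n_samples, n_features):
    """Yield batches from a raw float64 file without holding a
    long-lived memmap (illustrative sketch, not the asker's exact setup)."""
    number_of_batches = n_samples // batch_size
    counter = 0
    while True:
        # Re-open the memmap on every batch so the previous object
        # (and its references to cached pages) can be garbage-collected.
        data = np.memmap(path, dtype='float64', mode='r',
                         shape=(n_samples, n_features))
        # Copy the slice into a regular in-memory array for the model.
        batch = np.array(data[counter * batch_size:(counter + 1) * batch_size])
        del data  # drop the memmap reference; pages can now be evicted
        counter = (counter + 1) % number_of_batches
        yield batch
```

The key difference from the code in the question is that no memmap object survives across `yield` statements; only the small `batch` copy stays in RAM.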