Keras: batch training for multiple large datasets
Question
This question regards the common problem of training on multiple large files in Keras which are jointly too large to fit in GPU memory. I am using Keras 1.0.5 and I would like a solution that does not require 1.0.6. One way to do this was described by fchollet here and here:
import pickle

# Create generator that yields (current features X, current labels y)
def BatchGenerator(files):
    for file in files:
        current_data = pickle.load(open(file, "rb"))
        X_train = current_data[:, :-1]
        y_train = current_data[:, -1]
        yield (X_train, y_train)

# train model on each dataset
for epoch in range(n_epochs):
    for (X_train, y_train) in BatchGenerator(files):
        model.fit(X_train, y_train, batch_size=32, nb_epoch=1)
However, I fear that the state of the model is not saved, and that the model is reinitialized not only between epochs but also between datasets. Each "Epoch 1/1" below represents training on a different dataset:
~~~~~ Epoch 0 ~~~~~~
Epoch 1/1
295806/295806 [==============================] - 13s - loss: 15.7517
Epoch 1/1
407890/407890 [==============================] - 19s - loss: 15.8036
Epoch 1/1
383188/383188 [==============================] - 19s - loss: 15.8130
~~~~~ Epoch 1 ~~~~~~
Epoch 1/1
295806/295806 [==============================] - 14s - loss: 15.7517
Epoch 1/1
407890/407890 [==============================] - 20s - loss: 15.8036
Epoch 1/1
383188/383188 [==============================] - 15s - loss: 15.8130
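A minimal sanity check, assuming model is the compiled model used in the loop above: snapshot the weights between successive model.fit calls and see whether they keep changing.

import numpy as np

# Sketch of a sanity check: snapshot the weights, run one more fit call,
# and test whether any weight array changed in place.
weights_before = model.get_weights()
model.fit(X_train, y_train, batch_size=32, nb_epoch=1)
weights_after = model.get_weights()
changed = any(not np.array_equal(a, b)
              for a, b in zip(weights_before, weights_after))
print("weights carried over and updated:", changed)

If changed is True, each fit call continues from the previous weights rather than reinitializing them, and only the progress bar restarts.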
I am aware that one can use model.fit_generator, but since the method above has been repeatedly suggested as a way of batch training, I would like to know what I am doing wrong.
Thanks for your help,
Max
Recommended answer
It has been a while since I faced that problem, but I remember that I used Keras's functionality to provide data through Python generators, i.e. model = Sequential(); model.fit_generator(...).
An example code snippet (it should be self-explanatory):
import pickle
import numpy as np

# Generator that cycles through the pickled bundles forever, yielding one
# (X, y) mini-batch at a time; fit_generator expects an endless generator.
def generate_batches(files, batch_size):
    counter = 0
    while True:
        fname = files[counter]
        print(fname)
        counter = (counter + 1) % len(files)
        data_bundle = pickle.load(open(fname, "rb"))
        X_train = data_bundle[0].astype(np.float32)
        y_train = data_bundle[1].astype(np.float32)
        y_train = y_train.flatten()
        for cbatch in range(0, X_train.shape[0], batch_size):
            yield (X_train[cbatch:(cbatch + batch_size), :, :],
                   y_train[cbatch:(cbatch + batch_size)])

model = Sequential()
# ... add layers here ...
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

train_files = [train_bundle_loc + "bundle_" + str(cb) for cb in range(nb_train_bundles)]
gen = generate_batches(files=train_files, batch_size=batch_size)
history = model.fit_generator(gen, samples_per_epoch=samples_per_epoch,
                              nb_epoch=num_epoch, verbose=1, class_weight=class_weights)
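Note that with the Keras 1.x signature used here, samples_per_epoch has to be chosen by the caller. A minimal sketch of one way to derive it, assuming each pickled bundle stores (X, y) as the generator above expects:

import pickle

# Count the total number of samples across all bundles so that one
# "epoch" of fit_generator corresponds to one full pass over the files.
samples_per_epoch = sum(pickle.load(open(f, "rb"))[0].shape[0]
                        for f in train_files)

Choosing samples_per_epoch this way keeps each reported epoch aligned with exactly one pass through all of the training files.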