Is it logical to loop on model.fit in Keras?
Question
Is it logical to do as below in Keras in order not to run out of memory?
for path in ['xaa', 'xab', 'xac', 'xad']:
    x_train, y_train = prepare_data(path)
    model.fit(x_train, y_train, batch_size=50, epochs=20, shuffle=True)

model.save('model')
Answer
It is, but prefer model.train_on_batch if each iteration is generating a single batch. This eliminates some overhead that comes with fit.
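For example, a minimal sketch of such a train_on_batch loop (prepare_data and model are the hypothetical names from the question; the batch_slices helper is new here and only illustrates the slicing arithmetic):

```python
def batch_slices(total_samples, batch_size):
    # Yield slice objects covering [0, total_samples) in chunks of
    # batch_size; the final slice may be shorter than batch_size.
    batches = total_samples // batch_size
    if total_samples % batch_size > 0:
        batches += 1
    for b in range(batches):
        yield slice(b * batch_size, (b + 1) * batch_size)

# Hypothetical driver loop, mirroring the question's setup:
# for path in ['xaa', 'xab', 'xac', 'xad']:
#     x_train, y_train = prepare_data(path)
#     for sec in batch_slices(x_train.shape[0], 50):
#         model.train_on_batch(x_train[sec], y_train[sec])
```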
You can also try to create a generator and use model.fit_generator():
def dataGenerator(pathes, batch_size):
    while True:  # generators for Keras must be infinite
        for path in pathes:
            x_train, y_train = prepare_data(path)
            totalSamps = x_train.shape[0]
            batches = totalSamps // batch_size
            if totalSamps % batch_size > 0:
                batches += 1
            for batch in range(batches):
                section = slice(batch * batch_size, (batch + 1) * batch_size)
                yield (x_train[section], y_train[section])
Create and use:
gen = dataGenerator(['xaa', 'xab', 'xac', 'xad'], 50)
model.fit_generator(gen,
                    steps_per_epoch=expectedTotalNumberOfYieldsForOneEpoch,
                    epochs=epochs)
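Here expectedTotalNumberOfYieldsForOneEpoch is a placeholder: steps_per_epoch must equal the number of batches the generator yields in one pass over all files. A sketch of that count (the per-file sample counts in the example are an assumption):

```python
import math

def steps_per_epoch(sample_counts, batch_size):
    # One step per batch, counting the final partial batch of each file,
    # matching the rounding-up logic inside dataGenerator.
    return sum(math.ceil(n / batch_size) for n in sample_counts)

# e.g. four files of 1000 samples each with batch_size 50:
# steps_per_epoch([1000, 1000, 1000, 1000], 50) == 80
```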