Keras: How to get model predictions (or last layer output) in a custom generator during training?


Problem description

I have made a custom generator in which I need the model's predictions during training, so that I can do some calculations on them before the model is trained against the true labels. Therefore, I save the model first and then call model.predict() on the current state.

from keras.models import load_model
def custom_generator(model):
  while True:
    state, target_labels = next(train_it)

    model.save('my_model.h5')
    #pause training and do some calculations on the output of the model trained so far     
    print(state)
    print(target_labels)
    model.predict(state)         
    #resume training
    #model = load_model('my_model.h5')

    yield state, target_labels

model3.fit_generator(custom_generator(model3), steps_per_epoch=1, epochs = 10)
loss = model3.evaluate_generator(test_it, steps=1)
loss

I get the following error due to calling model.predict() inside custom_generator():

Error:



ValueError: Tensor Tensor("dense_2/Softmax:0", shape=(?, 200), dtype=float32) is not an element of this graph.

Kindly help me understand how to get model predictions (or the last layer's output) in a custom generator during training.

Here is my model:

#libraries
import keras
from keras.models import Sequential, Model
from keras.layers import Dense, Conv2D, GlobalAveragePooling2D
from keras.optimizers import SGD
from matplotlib import pyplot
from keras.applications.vgg16 import VGG16

model = VGG16(include_top=False, weights='imagenet')
print(model.summary())

#add layers
z = Conv2D(1, (3, 3), activation='relu')(model.output)
z = Conv2D(1,(1,1), activation='relu')(z)
z = GlobalAveragePooling2D()(z)
predictions3 = Dense(200, activation='softmax')(z)
model3 = Model(inputs=model.input, outputs=predictions3)
for layer in model3.layers[:20]:
   layer.trainable = False
for layer in model3.layers[20:]:
   layer.trainable = True
model3.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

Image data generators for loading training and testing data:

from keras.preprocessing.image import ImageDataGenerator
# create a data generator
datagen = ImageDataGenerator()
# load and iterate training dataset
train_it = datagen.flow_from_directory('DATA/C_Train/', class_mode='categorical', batch_size=1)
test_it = datagen.flow_from_directory('DATA/C_Test/', class_mode='categorical', batch_size=1)


Answer

Your best bet may be to write a custom train loop via train_on_batch or fit; the former is only at a disadvantage if you need use_multiprocessing=True or callbacks, which isn't the case here. Below is an implementation with train_on_batch. If you use fit instead (for multiprocessing, callbacks, etc.), make sure you feed only one batch at a time and provide no validation data (use model.evaluate instead), else the control flow breaks. (Also, a custom Callback is a valid, but more involved, alternative.)
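
As a rough sketch of that Callback alternative (the class name, the probe_batch argument, and the x_probe variable are illustrative, not part of the original answer), something along these lines records the model's predictions after every training batch:

from keras.callbacks import Callback

class BatchPredictionLogger(Callback):
    """Hypothetical helper: store predictions on a fixed probe batch
    at the end of every training batch."""
    def __init__(self, probe_batch):
        super(BatchPredictionLogger, self).__init__()
        self.probe_batch = probe_batch  # data to run predictions on
        self.outputs = []

    def on_batch_end(self, batch, logs=None):
        # self.model is attached by Keras once training starts
        self.outputs.append(self.model.predict(self.probe_batch))

# usage (illustrative): model3.fit_generator(train_it, steps_per_epoch=1, epochs=10,
#                                            callbacks=[BatchPredictionLogger(x_probe)])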



Custom train loop

iters_per_epoch = len(train_it) // batch_size
num_epochs = 5
outs_store_freq = 20 # in iters
print_loss_freq = 20 # in iters

iter_num = 0
epoch_num = 0
model_outputs = []
loss_history  = []

while epoch_num < num_epochs:
    while iter_num < iters_per_epoch:
        x_train, y_train = next(train_it)
        loss_history += [model3.train_on_batch(x_train, y_train)]

        x_test, y_test = next(test_it)
        if iter_num % outs_store_freq == 0:
            model_outputs += [model3.predict(x_test)]
        if iter_num % print_loss_freq == 0:
            print("Iter {} loss: {}".format(iter_num, loss_history[-1]))

        iter_num += 1
    print("EPOCH {} FINISHED".format(epoch_num + 1))
    epoch_num += 1
    iter_num = 0 # reset counter
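
If, as in the original question, you need the model's predictions on the current training batch itself before it is fit against the true labels, you can call predict directly inside the loop body above. A minimal sketch of the modified inner loop (the custom calculation is left as a placeholder):

        x_train, y_train = next(train_it)
        preds = model3.predict(x_train)  # output of the model trained so far
        # ... do your calculations with preds and y_train here ...
        loss_history += [model3.train_on_batch(x_train, y_train)]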



Full code

from keras.models import Sequential
from keras.layers import Dense, Conv2D, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import SGD
from keras.applications.vgg16 import VGG16
from keras.preprocessing.image import ImageDataGenerator

model = VGG16(include_top=False, weights='imagenet')
print(model.summary())

#add layers
z = Conv2D(1, (3, 3), activation='relu')(model.output)
z = Conv2D(1,(1,1), activation='relu')(z)
z = GlobalAveragePooling2D()(z)
predictions3 = Dense(2, activation='softmax')(z)
model3 = Model(inputs=model.input, outputs=predictions3)

for layer in model3.layers[:20]:
   layer.trainable = False
for layer in model3.layers[20:]:
   layer.trainable = True

model3.compile(optimizer=SGD(lr=0.0001, momentum=0.9), 
               loss='categorical_crossentropy')



batch_size = 1
datagen = ImageDataGenerator()
train_it = datagen.flow_from_directory('DATA/C_Train/', 
                                        class_mode='categorical', 
                                        batch_size=batch_size)
test_it = datagen.flow_from_directory('DATA/C_Test/', 
                                      class_mode='categorical', 
                                      batch_size=batch_size)

[custom train loop here]

BONUS CODE: to get the outputs of any layer, use the function below:

import keras.backend as K

def get_layer_outputs(model, layer_name, input_data, learning_phase=1):
    # output tensors of every layer whose name contains layer_name
    outputs   = [layer.output for layer in model.layers if layer_name in layer.name]
    # K.learning_phase() toggles Dropout/BatchNorm between train (1) and inference (0) mode
    layers_fn = K.function([model.input, K.learning_phase()], outputs)
    return layers_fn([input_data, learning_phase])

outs = get_layer_outputs(model3, 'dense_1', x_test, 0)  # 0 == inference mode
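
For instance, to store the softmax layer's activations instead of model.predict outputs inside the custom loop above (assuming the Dense layer really is named 'dense_1' in your model), one could replace the model_outputs line with:

        if iter_num % outs_store_freq == 0:
            model_outputs += [get_layer_outputs(model3, 'dense_1', x_test, 0)]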
