VGG, perceptual loss in Keras


Problem description

I'm wondering if it's possible to add a custom model to a loss function in keras. For example:

from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras import backend as K

def model_loss(y_true, y_pred):
    # build a small model and run both tensors through it
    inp = Input(shape=(128, 128, 1))
    x = Dense(2)(inp)
    x = Flatten()(x)

    model = Model(inputs=[inp], outputs=[x])
    a = model(y_pred)
    b = model(y_true)

    # calculate MSE between the two mapped tensors
    mse = K.mean(K.square(a - b))
    return mse

This is a simplified example. I'll actually be using a VGG net in the loss, so just trying to understand the mechanics of keras.

Recommended answer

The usual way of doing that is appending your VGG to the end of your model, making sure all its layers have trainable=False before compiling.

Then you recalculate your Y_train.

Suppose you have the following models:

mainModel - the one to which you want to apply the loss function
lossModel - the one that is part of the loss function you want to use
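
For instance, a minimal sketch of what lossModel could look like, assuming a pretrained VGG16 and an arbitrary feature layer ('block3_conv3' and the 128x128x3 input shape are illustrative choices, not something fixed by the answer):

from keras.applications import VGG16
from keras.models import Model

# assumes mainModel outputs 3-channel 128x128 images; adjust to your case
vgg = VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))

# take the activations of one intermediate layer as the "perceptual" features
lossModel = Model(inputs=vgg.input, outputs=vgg.get_layer('block3_conv3').output)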

Create a new model appending one to another:

from keras.models import Model

lossOut = lossModel(mainModel.output) #you pass the output of one model to the other

fullModel = Model(mainModel.input,lossOut) #you create a model for training following a certain path in the graph. 

This model shares its weights with mainModel and lossModel, so training it will affect those models as well.

Make sure lossModel is not trainable before compiling:

lossModel.trainable = False
for l in lossModel.layers:
    l.trainable = False

fullModel.compile(loss='mse',optimizer=....)
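
One way to sanity-check the freeze, if you just want to inspect parameter counts, is:

fullModel.summary()  # the "Non-trainable params" line should account for all of lossModel's weights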

Now adjust your data for training:

fullYTrain = lossModel.predict(originalYTrain)

Finally, train it:

fullModel.fit(xTrain, fullYTrain, ....)
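
Note that fullYTrain has the shape of lossModel's output (feature maps), not of the original images, which is why it is precomputed with predict. After training, the learned weights live in mainModel's layers, so inference uses mainModel directly rather than fullModel. A minimal usage sketch, assuming xTest is your test input:

predictions = mainModel.predict(xTest)  # outputs in image space, no VGG features involved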
