VGG, perceptual loss in keras


Problem description

I'm wondering if it's possible to add a custom model to a loss function in keras. For example:

from keras.layers import Input, Dense, Flatten
from keras.models import Model
from keras import backend as K

def model_loss(y_true, y_pred):
    # a small model that both y_true and y_pred are passed through
    inp = Input(shape=(128, 128, 1))
    x = Dense(2)(inp)
    x = Flatten()(x)

    model = Model(inputs=[inp], outputs=[x])
    a = model(y_pred)
    b = model(y_true)

    # MSE between the two outputs
    mse = K.mean(K.square(a - b))
    return mse

This is a simplified example. I'll actually be using a VGG net in the loss, so just trying to understand the mechanics of keras.
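
For reference, a loss defined this way is passed to compile like any built-in loss, since Keras accepts any callable taking (y_true, y_pred). A minimal usage sketch; someModel, xTrain and yTrain are placeholders, not part of the original question:

someModel.compile(optimizer='adam', loss=model_loss)  # custom callable used as the loss
someModel.fit(xTrain, yTrain, epochs=10)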

Recommended answer

The usual way of doing that is appending your VGG to the end of your model, making sure all its layers have trainable=False before compiling.

Then you recalculate your Y_train.

Suppose you have these models:

mainModel - the one you want to apply a loss function to
lossModel - the one that is part of the loss function you want
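
Since the question is about a VGG-based perceptual loss, lossModel would typically be a pretrained VGG feature extractor. A minimal sketch, assuming keras.applications.VGG16, 3-channel 128x128 outputs from mainModel, and an arbitrarily chosen intermediate layer:

from keras.applications.vgg16 import VGG16
from keras.models import Model

# Assumption: mainModel outputs 128x128 RGB images preprocessed the way VGG16 expects.
vgg = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))

# Use an intermediate activation as the perceptual feature space (layer choice is illustrative).
lossModel = Model(inputs=vgg.input, outputs=vgg.get_layer('block3_conv3').output)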

Create a new model appending one to another:

from keras.models import Model

lossOut = lossModel(mainModel.output)  # pass the output of one model into the other

fullModel = Model(mainModel.input, lossOut)  # a model for training that follows this path through the graph

This model will have the exact same weights as mainModel and lossModel, and training this model will affect the other models.

Make sure lossModel is not trainable before compiling:

lossModel.trainable = False
for l in lossModel.layers:
    l.trainable = False

fullModel.compile(loss='mse',optimizer=....)

Now adjust your data for training:

fullYTrain = lossModel.predict(originalYTrain)

Finally, train it:

fullModel.fit(xTrain, fullYTrain, ....)
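
Once training is done, mainModel shares the trained weights, so it can be used directly for inference. A short sketch; xTest is a placeholder for your test data:

predictions = mainModel.predict(xTest)  # returns images, not VGG features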

