Different loss function for validation set in Keras


Question

I have an unbalanced training dataset, which is why I built a custom weighted categorical cross-entropy loss function. But the problem is that my validation set is balanced, and I want to use the regular categorical cross-entropy loss there. So can I pass a different loss function for the validation set within Keras? I mean, the weighted one for the training set and the regular one for the validation set?

def weighted_loss(y_true, y_pred):
    # ... your implementation of the weighted categorical cross-entropy loss
    return loss

model.compile(loss=weighted_loss, metrics=['accuracy'])
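For context, a weighted categorical cross-entropy can be written in a few lines with the Keras backend. The sketch below is only an illustration of what weighted_loss might look like; the class_weights values are made up for the example and should be derived from your own class frequencies:

import numpy as np
from keras import backend as K

# Hypothetical per-class weights; in practice compute them from your class
# frequencies, e.g. inversely proportional to how often each class appears.
class_weights = K.constant(np.array([0.5, 2.0, 1.0], dtype='float32'))

def weighted_loss(y_true, y_pred):
    # Clip predictions to avoid log(0), then compute per-class cross-entropy terms.
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    per_class = -y_true * K.log(y_pred)          # shape: (batch_size, num_classes)
    # Scale each class's contribution by its weight and sum over classes.
    return K.sum(per_class * class_weights, axis=-1)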

Answer

You can try the backend function K.in_train_phase(), which is used by the Dropout and BatchNormalization layers to implement different behaviors in training and validation.

from keras import backend as K

def custom_loss(y_true, y_pred):
    weighted_loss = ...  # your implementation of weighted crossentropy loss
    unweighted_loss = K.sparse_categorical_crossentropy(y_true, y_pred)
    return K.in_train_phase(weighted_loss, unweighted_loss)

The first argument of K.in_train_phase() is the tensor used in the training phase, and the second one is the tensor used in the test phase.
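As a quick sanity check of that switching behavior (a minimal sketch, relying on the optional training argument of K.in_train_phase() to select the branch explicitly):

from keras import backend as K

train_tensor = K.constant(1.0)
test_tensor = K.constant(2.0)

# Selecting the branch explicitly via the optional `training` flag:
print(K.eval(K.in_train_phase(train_tensor, test_tensor, training=1)))  # 1.0
print(K.eval(K.in_train_phase(train_tensor, test_tensor, training=0)))  # 2.0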

For example, if we set weighted_loss to 0 (just to verify the effect of the K.in_train_phase() function):

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def custom_loss(y_true, y_pred):
    weighted_loss = 0 * K.sparse_categorical_crossentropy(y_true, y_pred)
    unweighted_loss = K.sparse_categorical_crossentropy(y_true, y_pred)
    return K.in_train_phase(weighted_loss, unweighted_loss)

model = Sequential([Dense(100, activation='relu', input_shape=(100,)),
                    Dense(1000, activation='softmax')])
model.compile(optimizer='adam', loss=custom_loss)
model.outputs[0]._uses_learning_phase = True  # required if no dropout or batch norm in the model

X = np.random.rand(1000, 100)
y = np.random.randint(1000, size=1000)
model.fit(X, y, validation_split=0.1)

Epoch 1/10
900/900 [==============================] - 1s 868us/step - loss: 0.0000e+00 - val_loss: 6.9438

As you can see, the loss in the training phase is indeed the one multiplied by 0.

Note that if there's no dropout or batch norm in your model, you'll need to manually "turn on" the _uses_learning_phase boolean switch, otherwise K.in_train_phase() will have no effect by default.
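As a follow-up check (reusing the model and data from the example above), evaluating outside of fit() runs in the test phase, so it should report the unweighted loss rather than 0:

# model.evaluate() runs in the test phase, so the unweighted branch is used
# and the reported loss is non-zero, unlike the training loss above.
print(model.evaluate(X, y, verbose=0))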

