Keras custom loss function: Accessing current input pattern


Problem Description


In Keras (with Tensorflow backend), is the current input pattern available to my custom loss function?

The current input pattern is defined as the input vector used to produce the prediction. For example, consider the following: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, shuffle=False). Then the current input pattern is the current X_train vector associated with y_train (which is termed y_true in the loss function).

When designing a custom loss function, I intend to optimize/minimize a value that requires access to the current input pattern, not just the current prediction.

I've taken a look through https://github.com/fchollet/keras/blob/master/keras/losses.py

I've also looked through "Cost function that isn't just y_pred, y_true?"

I am also familiar with previous examples of producing a customized loss function:

import keras.backend as K

def customLoss(y_true,y_pred):
    return K.sum(K.log(y_true) - K.log(y_pred))

Presumably (y_true,y_pred) are defined elsewhere. I've taken a look through the source code without success and I'm wondering whether I need to define the current input pattern myself or whether this is already accessible to my loss function.

Solution

You can wrap the loss function in an inner function and pass your input tensor to it (as is commonly done when passing additional arguments to a loss function).

import keras.backend as K
from keras.layers import Input, Dense
from keras.models import Model

def custom_loss_wrapper(input_tensor):
    # The inner function keeps the (y_true, y_pred) signature Keras expects,
    # while closing over input_tensor from the enclosing scope.
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')
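The closure pattern itself is plain Python, independent of Keras. A minimal NumPy sketch of the same wrapper idea (the names and values here are illustrative, not from the answer above):

```python
import numpy as np

def loss_wrapper(input_batch):
    # The outer function captures the extra argument in a closure; the inner
    # function keeps the two-argument (y_true, y_pred) signature.
    def loss(y_true, y_pred):
        eps = 1e-7
        p = np.clip(y_pred, eps, 1 - eps)
        # NumPy stand-in for K.binary_crossentropy(y_true, y_pred)...
        bce = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
        # ...plus the extra K.mean(input_tensor)-style term
        return np.mean(bce) + np.mean(input_batch)
    return loss

X_batch = np.full((4, 10), 0.5)          # hypothetical input batch
loss_fn = loss_wrapper(X_batch)
y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.2])
print(loss_fn(y_true, y_pred))           # ~0.6643: 0.1643 BCE + 0.5 mean(X_batch)
```

Because loss_fn carries input_batch in its closure, the framework can keep calling it with only (y_true, y_pred), which is exactly why the wrapper satisfies Keras's loss interface.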

You can verify that input_tensor and the loss value (mostly the K.mean(input_tensor) part) change as different values of X are passed to the model.

import numpy as np

X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
model.test_on_batch(X, y)  # => 1.1974642

X *= 1000
model.test_on_batch(X, y)  # => 511.15466
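The jump from roughly 1.2 to 511 comes almost entirely from the K.mean(input_tensor) term, which scales linearly with the input. A quick NumPy check of that term alone (illustrative seed and shapes, not the answer's exact numbers):

```python
import numpy as np

rng = np.random.default_rng(0)     # hypothetical seed, for repeatability
X = rng.random((1000, 10))         # uniform on [0, 1), so mean(X) is near 0.5
scaled = (X * 1000).mean()

# Multiplying the input by 1000 multiplies the mean term by 1000 as well,
# which is why the batch loss explodes from ~1.2 to ~511.
print(np.isclose(scaled, 1000 * X.mean()))  # prints True
```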

