Custom loss function implementation issue in keras


Question

I am implementing a custom loss function in Keras. The output of the model is a 10-dimensional softmax layer. To calculate the loss, I first need to find the index of the neuron firing 1 and then take the difference between the predicted index and the true index. I'm doing the following:

from keras import backend as K

def diff_loss(y_true, y_pred):

    # find the indices of the neurons firing 1
    true_ind = K.argmax(y_true, axis=0)
    pred_ind = K.argmax(y_pred, axis=0)

    # cast them to float32
    x = K.cast(true_ind, 'float32')
    y = K.cast(pred_ind, 'float32')

    return K.abs(x - y)

But it gives the error: "ValueError: None values not supported." What's the problem here?
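A minimal sketch (assuming a TensorFlow 2 backend, not code from the original post) that reproduces the root cause: argmax has no gradient, so the gradient of anything built from it comes back as None, which is the value Keras then refuses with "None values not supported":

```python
import tensorflow as tf

x = tf.Variable([0.1, 0.7, 0.2])
with tf.GradientTape() as tape:
    # integer index of the largest entry, cast to float like in diff_loss
    ind = tf.cast(tf.argmax(x), tf.float32)

grad = tape.gradient(ind, x)
print(grad)  # None: no gradient flows through argmax
```

The optimizer needs this gradient to update the weights, so a loss made only of argmax results cannot be trained.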

Answer

This happens because your function is not differentiable: argmax produces constant integer indices, so no gradient can flow through it, and the gradient comes back as None, which is exactly what the error reports.

There is simply no way around this if you want the argmax itself as the result.

Since you're using "softmax", that means only one class is correct (you can't have two classes at the same time).

And since you want index differences, maybe you could work with a single continuous result instead (continuous values are differentiable).

Work with a single output ranging from -0.5 to 9.5, and recover the class by rounding the result.

That way, the last layer can have only one unit:

lastLayer = Dense(1, activation='sigmoid', ....)  # or another kind if it's not dense

And change the range with a Lambda layer:

lambdaLayer = Lambda(lambda x: 10*x - 0.5)

Now your loss can simply be 'mae' (mean absolute error).
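Putting the pieces above together, here is a runnable sketch of the suggested setup (the hidden layer size, input shape, and variable names are illustrative assumptions, not from the original post): one sigmoid unit rescaled to [-0.5, 9.5], trained with 'mae' against the integer class indices, and rounded back to a class at prediction time:

```python
import numpy as np
from keras.layers import Input, Dense, Lambda
from keras.models import Model

inp = Input(shape=(20,))                    # assumed input size
h = Dense(32, activation='relu')(inp)       # assumed hidden layer
out = Dense(1, activation='sigmoid')(h)     # single unit in [0, 1]
out = Lambda(lambda x: 10 * x - 0.5)(out)   # rescale to [-0.5, 9.5]

model = Model(inp, out)
model.compile(optimizer='adam', loss='mae')  # plain mean absolute error

# targets are the class indices 0..9 themselves, not one-hot vectors
X = np.random.rand(8, 20)
y = np.random.randint(0, 10, size=(8, 1)).astype('float32')
model.fit(X, y, epochs=1, verbose=0)

# recover discrete classes by rounding (and clipping to the valid range)
pred_class = np.clip(np.round(model.predict(X, verbose=0)), 0, 9).astype(int)
```

Note that the labels fed to `fit` are the raw indices, since the model now does regression on the index rather than classification.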

The downside of this approach is that the 'sigmoid' activation is not evenly distributed across the classes: some classes will be more probable than others. But since it's important to have a bounded output, it seems the best idea at first glance.
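A small numeric check (plain NumPy, illustrative only) of this caveat: evenly spaced pre-activations do not map to evenly spaced classes, because the sigmoid compresses its tails, so the extreme classes get much wider input ranges than the middle ones:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-6, 6, 5)                 # evenly spaced pre-activations
classes = np.round(10 * sigmoid(z) - 0.5)  # apply the rescaling, then round
print(classes)                             # class steps are uneven
```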

This will only work if your classes follow a logically increasing sequence. (I guess they do, otherwise you wouldn't be trying that kind of loss, right?)
