Custom loss function in Keras to penalize false negatives


Question

I am working on a medical dataset where I am trying to have as few false negatives as possible. A prediction of "disease when actually no disease" is okay for me, but a prediction of "no disease when actually a disease" is not. That is, I am okay with FP but not FN.

After doing some research, I found ways like keeping a higher learning rate for one class, using class weights, ensemble learning with specificity/sensitivity, etc.

I achieved a near-desired result using class weights like class_weight = {0: 0.3, 1: 0.7} and then calling model.fit(class_weight=class_weight). This gave me a very low FN but a pretty high FP. I am trying to reduce FP as much as possible while keeping FN very low.

I am struggling to write a custom loss function in Keras that will help me penalize false negatives. Thanks for the help.

Answer

I'll briefly introduce the concepts we're trying to tackle.

Recall

Out of all that were actually positive, how many did our model predict as positive?

All that were positive = TP + FN

What our model said were positive = TP

recall = TP / (TP + FN)

Since recall is inversely proportional to FN, improving it decreases FN.

Specificity

Out of all that were actually negative, how many did our model predict as negative?

All that were negative = TN + FP

What our model said were negative = TN

specificity = TN / (TN + FP)

Since specificity is inversely proportional to FP, improving it decreases FP.
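To make the two definitions concrete, here is a small NumPy check on a made-up batch of labels and predictions (the arrays are illustrative, not from the question):

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])  # one FN, one FP

TP = int(np.sum((y_true == 1) & (y_pred == 1)))  # 3
FN = int(np.sum((y_true == 1) & (y_pred == 0)))  # 1
TN = int(np.sum((y_true == 0) & (y_pred == 0)))  # 3
FP = int(np.sum((y_true == 0) & (y_pred == 1)))  # 1

recall = TP / (TP + FN)       # fewer FN -> higher recall
specificity = TN / (TN + FP)  # fewer FP -> higher specificity
```

Flipping the one false negative back to a correct prediction would push recall to 1.0 while leaving specificity untouched, which is exactly the trade-off the question cares about.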

In your next searches, or whatever classification-related activity you perform, knowing these is going to give you an extra edge in communication and understanding.

So. These two concepts, as you may have figured out already, are opposites. This means that increasing one is likely to decrease the other.

Since you want priority on recall, but don't want to lose too much in specificity, you can combine both of them and attribute weights. Following what's clearly explained in this answer:

import numpy as np
import keras.backend as K

def binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight):

    TN = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 0)
    TP = np.logical_and(K.eval(y_true) == 1, K.eval(y_pred) == 1)

    FP = np.logical_and(K.eval(y_true) == 0, K.eval(y_pred) == 1)
    FN = np.logical_and(K.eval(y_true) == 1, K.eval(y_pred) == 0)

    # Converted as Keras tensors (the original snippet only converted TN and FP;
    # TP and FN need the same treatment before the divisions below)
    TN = K.sum(K.variable(TN))
    TP = K.sum(K.variable(TP))
    FP = K.sum(K.variable(FP))
    FN = K.sum(K.variable(FN))

    specificity = TN / (TN + FP + K.epsilon())
    recall = TP / (TP + FN + K.epsilon())

    return 1.0 - (recall_weight*recall + spec_weight*specificity)
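As a sanity check of the same arithmetic, here is a NumPy-only version of the loss above (no Keras needed; eps stands in for K.epsilon(), and the test arrays are made up):

```python
import numpy as np

def binary_recall_specificity_np(y_true, y_pred, recall_weight, spec_weight, eps=1e-7):
    # Same counts and formula as the Keras version, on plain arrays
    TN = np.sum((y_true == 0) & (y_pred == 0))
    TP = np.sum((y_true == 1) & (y_pred == 1))
    FP = np.sum((y_true == 0) & (y_pred == 1))
    FN = np.sum((y_true == 1) & (y_pred == 0))

    specificity = TN / (TN + FP + eps)
    recall = TP / (TP + FN + eps)
    return 1.0 - (recall_weight * recall + spec_weight * specificity)

y = np.array([1, 1, 0, 0])
perfect = binary_recall_specificity_np(y, y, 0.9, 0.1)        # ~0.0
all_wrong = binary_recall_specificity_np(y, 1 - y, 0.9, 0.1)  # 1.0
```

Note this only verifies the formula; the Keras snippet itself relies on K.eval, which assumes the tensors can be evaluated eagerly at that point.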

Notice recall_weight and spec_weight? They're the weights we're attributing to each of the metrics. By distribution convention, they should always add up to 1.0¹, e.g. recall_weight=0.9, specificity_weight=0.1. The intention here is for you to see what proportion best suits your needs.

But Keras' loss functions must only receive (y_true, y_pred) as arguments, so let's define a wrapper:

# Our custom loss' wrapper
def custom_loss(recall_weight, spec_weight):

    def recall_spec_loss(y_true, y_pred):
        return binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight)

    # Returns the (y_true, y_pred) loss function
    return recall_spec_loss
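The wrapper is just a closure: the returned two-argument function remembers the weights it was built with. A tiny pure-Python analogue of the pattern (make_loss and toy_loss are hypothetical names, and the stand-in body only encodes the perfect-score case):

```python
def make_loss(recall_weight, spec_weight):
    # Closure: the returned function captures the two weights
    def toy_loss(y_true, y_pred):
        # Stand-in body: the loss at recall == specificity == 1.0
        return 1.0 - (recall_weight * 1.0 + spec_weight * 1.0)
    return toy_loss

loss_fn = make_loss(0.9, 0.1)  # has the (y_true, y_pred) signature Keras expects
value = loss_fn(None, None)    # 0.0, since the weights sum to 1
```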

Then, to use it:

# Build model, add layers, etc
model = my_model
# Getting our loss function for specific weights
loss = custom_loss(recall_weight=0.9, spec_weight=0.1)
# Compiling the model with such loss
model.compile(loss=loss)

¹ The weights, added, must total 1.0, because in case both recall=1.0 and specificity=1.0 (the perfect score), the formula

loss = 1.0 - (recall_weight*recall + spec_weight*specificity)

should give us, for example,

loss = 1.0 - (0.9*1.0 + 0.1*1.0) = 0.0

Clearly, if we got the perfect score, we'd want our loss to equal 0.

