How to calculate F1 Macro in Keras?


Question

I've tried to use the code that Keras provided before it was removed. Here's the code:

from keras import backend as K

def precision(y_true, y_pred):
    """Precision: fraction of predicted positives that are actually positive."""
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def recall(y_true, y_pred):
    """Recall: fraction of actual positives that are predicted positive."""
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def fbeta_score(y_true, y_pred, beta=1):
    """F-beta score: the weighted harmonic mean of precision and recall."""
    if beta < 0:
        raise ValueError('The lowest choosable beta is zero (only precision).')

    # If there are no true positives, fix the F score at 0 like sklearn.
    if K.sum(K.round(K.clip(y_true, 0, 1))) == 0:
        return 0

    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    bb = beta ** 2
    fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
    return fbeta_score

def fmeasure(y_true, y_pred):
    """F1 score: the F-beta score with beta=1."""
    return fbeta_score(y_true, y_pred, beta=1)
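For reference, these functions would have been attached at compile time; here is a minimal sketch of that wiring (the model, loss, and optimizer are placeholder assumptions, chosen to match the val_accuracy, val_precision, val_recall, and val_fmeasure outputs mentioned below):

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy', precision, recall, fmeasure])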

From what I saw (I'm an amateur at this), it seems like they use the correct formulas. But when I tried to use them as metrics in the training process, I got exactly equal output for val_accuracy, val_precision, val_recall, and val_fmeasure. I do believe that this could happen even if the formulas are correct, but I believe it is unlikely. Is there any explanation for this issue? Thank you

Answer

Since Keras 2.0, the metrics f1, precision, and recall have been removed. The solution is to use a custom metric function:

from keras import backend as K

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.

        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.

        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))


model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=[f1])

The return line of this function

return 2*((precision*recall)/(precision+recall+K.epsilon()))

was modified by adding the constant epsilon in order to avoid division by zero. Thus, NaN will not be computed.
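Note that both the removed Keras metrics and the f1 function above are computed batch-wise, so they only approximate the score over the whole epoch and give no per-class macro average. To compute a true F1 macro, one option is to evaluate on the full validation set at the end of each epoch with a callback and sklearn; here is a minimal sketch (the X_val/y_val names, the 0.5 threshold, and the fit arguments are assumptions for illustration, not part of the original answer):

from keras.callbacks import Callback
from sklearn.metrics import f1_score

class F1MacroCallback(Callback):
    """Compute macro-averaged F1 on the full validation set each epoch."""

    def __init__(self, X_val, y_val):
        super(F1MacroCallback, self).__init__()
        self.X_val = X_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        # Threshold the sigmoid outputs at 0.5 (an assumption; adjust per task).
        y_pred = (self.model.predict(self.X_val) > 0.5).astype(int)
        macro_f1 = f1_score(self.y_val, y_pred, average='macro')
        print(' - val_f1_macro: %.4f' % macro_f1)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          callbacks=[F1MacroCallback(X_val, y_val)])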
