Custom loss function for U-net in keras using class weights: `class_weight` not supported for 3+ dimensional targets

Problem description

Here's the code I'm working with (pulled from Kaggle mostly):

inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
...
outputs = Conv2D(4, (1, 1), activation='sigmoid') (c9)

model = Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer='adam', loss='dice', metrics=[mean_iou])

results = model.fit(X_train, Y_train, validation_split=0.1, batch_size=8, epochs=30, class_weight=class_weights)

I have 4 classes that are very imbalanced. Class A equals 70%, class B = 15%, class C = 10%, and class D = 5%. However, I care most about class D. So I did the following type of calculation: D_weight = A/D = 70/5 = 14, and so on for the weights for classes B and A. (If there are better methods to select these weights, feel free to suggest them.)
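
A minimal sketch of that inverse-frequency calculation (the class fractions are the ones quoted above; how the resulting weights are rounded is a judgment call):

# Sketch of the inverse-frequency weighting described above:
# weight_i = freq(A) / freq(i), so the rarest class (D) gets the largest weight.
fractions = {'A': 0.70, 'B': 0.15, 'C': 0.10, 'D': 0.05}
weights = {name: fractions['A'] / f for name, f in fractions.items()}
# -> A: 1.0, B: ~4.7, C: 7.0, D: 14.0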

In the last line, I'm trying to properly set class_weights, and I'm doing it like so: class_weights = {0: 1.0, 1: 6, 2: 7, 3: 14}.

However, when I do this, I get the following error.

class_weight not supported for 3+ dimensional targets.

Is it possible to add a dense layer after the last layer and just use it as a dummy layer, so that I can pass the class_weights, and then only use the output of the last conv2d layer to do the prediction?

If this is not possible, how would I modify the loss function? (I'm aware of this post; however, just passing the weights into the loss function won't cut it, because the loss function is called separately for each class.) Currently, I'm using the following loss function:

from keras import backend as K
from keras.losses import binary_crossentropy

def dice_coef(y_true, y_pred):
    # smoothed Dice coefficient over the flattened masks
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def bce_dice_loss(y_true, y_pred):
    # combine binary cross-entropy with the (negated) Dice coefficient
    return 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

But I don't see any way in which I can input class weights. If someone wants the full working code, see this post. But remember to change the final conv2d layer's number of classes to 4 instead of 1.

Recommended answer

You can always apply the weights yourself.

You can import the originalLossFunc below from keras.losses.
The weightsList is your list of weights, ordered by class.

from keras import backend as K

def weightedLoss(originalLossFunc, weightsList):

    def lossFunc(true, pred):

        axis = -1  #if channels last
        #axis = 1  #if channels first

        #argmax returns the index of the element with the greatest value
        #done on the class axis, it returns the class index
        classSelectors = K.argmax(true, axis=axis)
            #if your loss is sparse, use only true as classSelectors

        #considering weights are ordered by class, for each class
        #true(1) if the class index is equal to the weight index
        #(if you hit a dtype mismatch here, cast i to the dtype of classSelectors,
        # e.g. K.equal(K.cast(i, 'int64'), classSelectors))
        classSelectors = [K.equal(i, classSelectors) for i in range(len(weightsList))]

        #casting boolean to float for calculations
        #each tensor in the list contains 1 where the ground truth class equals its index
        #if you sum all of these, you get a tensor full of ones
        classSelectors = [K.cast(x, K.floatx()) for x in classSelectors]

        #multiply each of the selections above by its respective weight
        weights = [sel * w for sel, w in zip(classSelectors, weightsList)]

        #sum all the selections
        #the result is a tensor with the respective weight for each element in the predictions
        weightMultiplier = weights[0]
        for i in range(1, len(weights)):
            weightMultiplier = weightMultiplier + weights[i]

        #make sure your originalLossFunc only collapses the class axis
        #you need the other axes intact to multiply by the weights tensor
        loss = originalLossFunc(true, pred)
        loss = loss * weightMultiplier

        return loss
    return lossFunc

To use it in compile:

model.compile(loss= weightedLoss(keras.losses.categorical_crossentropy, weights), 
              optimizer=..., ...)
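
For instance, with the weights from the question (a hedged sketch, assuming the model and mean_iou metric defined earlier, a softmax over the 4 classes, and one-hot masks so that categorical_crossentropy collapses only the class axis):

import keras

weights = [1.0, 6.0, 7.0, 14.0]  # one weight per class, ordered by class index (A, B, C, D)

model.compile(optimizer='adam',
              loss=weightedLoss(keras.losses.categorical_crossentropy, weights),
              metrics=[mean_iou])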

Changing the class balance directly on the input data

You can change the balance of the input samples too.

For instance, if you have 5 samples from class 1 and 10 samples from class 2, pass the samples from class 1 twice in the input arrays.
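
A hedged sketch of that idea for the segmentation case above (the names X_train/Y_train and the "contains class D" criterion are assumptions): duplicate the images whose one-hot masks contain the rare class, so it is seen more often per epoch.

import numpy as np

rare_class = 3  # index of class D in one-hot masks of shape (N, H, W, 4)
has_rare = Y_train[..., rare_class].sum(axis=(1, 2)) > 0  # one boolean per image
X_balanced = np.concatenate([X_train, X_train[has_rare]], axis=0)
Y_balanced = np.concatenate([Y_train, Y_train[has_rare]], axis=0)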

Instead of working "by class", you can also work "by sample".

Create an array of weights for each sample in your input array: len(x_train) == len(weights)

Then call fit passing this array to the sample_weight argument.
(If you use fit_generator, the generator will have to return the weights along with the train/true pairs: return/yield inputs, targets, weights.)
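
A minimal sketch of the "by sample" route under the same assumptions (one-hot masks of shape (N, H, W, 4); up-weighting every image that contains class D is just one possible heuristic):

import numpy as np

# one weight per training image, heavier wherever the rare class D appears
sample_weights = np.ones(len(X_train), dtype='float32')
sample_weights[Y_train[..., 3].sum(axis=(1, 2)) > 0] = 14.0

model.fit(X_train, Y_train, batch_size=8, epochs=30,
          sample_weight=sample_weights)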
