Keras: combining two losses with adjustable weights


Question

So here is the detailed description: I have a Keras functional model with two layers, whose outputs are x1 and x2.

x1 = Dense(1,activation='relu')(prev_inp1)

x2 = Dense(2,activation='relu')(prev_inp2)

I need to use these x1 and x2, merge/add them, and come up with a weighted loss function like in the attached image, propagating the 'same loss' into both branches. Alpha is flexible and may vary with iterations.

Answer

Propagating the "same loss" into both branches will not have any effect unless alpha depends on both branches. If alpha does not vary with both branches, then part of the loss will simply be a constant with respect to one branch.

So, in this case, just compile the model with the two losses kept separate and pass the weights to the compile method:

model.compile(optimizer='someOptimizer', loss=[loss1, loss2], loss_weights=[alpha, 1-alpha])

Compile again when you need alpha to change.
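To see what `loss_weights=[alpha, 1-alpha]` computes, here is a small NumPy sketch of the weighted total; mean squared error stands in for the two losses, and the targets/predictions are made-up numbers for illustration:

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

alpha = 0.7

# Made-up targets/predictions for the two heads (x1 has 1 unit, x2 has 2).
y1_true, y1_pred = np.array([1.0]), np.array([0.5])
y2_true, y2_pred = np.array([2.0, 3.0]), np.array([2.0, 2.0])

# Keras sums each head's loss scaled by its entry in loss_weights:
total = alpha * mse(y1_true, y1_pred) + (1 - alpha) * mse(y2_true, y2_pred)
print(total)  # 0.7*0.25 + 0.3*0.5 = 0.325
```

Because `alpha` is baked into the compiled training function, changing it requires calling `compile` again, as noted above.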

But if alpha does indeed depend on both branches, then you need to concatenate the results and calculate alpha's value:

singleOut = Concatenate()([x1,x2])

And a custom loss function:

def weightedLoss(yTrue, yPred):
    # The concatenated output has x1 in column 0 (1 unit) and x2 in the
    # remaining columns (2 units); slice along the feature axis, not the batch.
    x1True = yTrue[:, :1]
    x2True = yTrue[:, 1:]

    x1Pred = yPred[:, :1]
    x2Pred = yPred[:, 1:]

    #calculate alpha somehow with keras backend functions

    return alpha * someLoss(x1True, x1Pred) + (1 - alpha) * someLoss(x2True, x2Pred)

Compile with this loss function:

model.compile(loss=weightedLoss, optimizer=....)
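Putting the pieces together, here is a minimal runnable sketch of the concatenated-output approach. The input shapes, the use of mean squared error as `someLoss`, and the way alpha is derived (a sigmoid of the mean prediction, just so it depends on both branches and stays in (0, 1)) are all assumptions for illustration, not part of the original answer:

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

def weightedLoss(yTrue, yPred):
    # Column 0 belongs to the x1 branch (1 unit); columns 1: belong to x2.
    x1True, x2True = yTrue[:, :1], yTrue[:, 1:]
    x1Pred, x2Pred = yPred[:, :1], yPred[:, 1:]

    # Assumption: alpha computed from the predictions of both branches,
    # squashed into (0, 1) with a sigmoid.
    alpha = K.sigmoid(K.mean(yPred))

    mse = lambda t, p: K.mean(K.square(t - p))
    return alpha * mse(x1True, x1Pred) + (1 - alpha) * mse(x2True, x2Pred)

# Hypothetical input shapes standing in for prev_inp1 / prev_inp2.
inp1 = Input(shape=(4,))
inp2 = Input(shape=(4,))
x1 = Dense(1, activation='relu')(inp1)
x2 = Dense(2, activation='relu')(inp2)
singleOut = Concatenate()([x1, x2])

model = Model([inp1, inp2], singleOut)
model.compile(loss=weightedLoss, optimizer='adam')
```

Because alpha is computed inside the loss from the tensors themselves, no recompilation is needed when it changes; it is re-evaluated on every batch.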
