How to train the network only on one output when there are multiple outputs?


Question

I am using a multiple-output model in Keras:

model1 = Model(input=x, output=[y2, y3])
model1.compile(optimizer='sgd', loss=custom_loss)

My custom_loss function is:

from keras import backend as K

def custom_loss(y_true, y_pred):
   y2_pred = y_pred[0]
   y2_true = y_true[0]

   loss = K.mean(K.square(y2_true - y2_pred), axis=-1)
   return loss

I only want to train the network on output y2.

What is the shape/structure of the y_true and y_pred arguments in the loss function when multiple outputs are used? Can I access them as above? Is it y_pred[0] or y_pred[:,0]?

Answer

I only want to train the network on output y2.

Based on the Keras functional API guide, you can use:

model1 = Model(input=x, output=[y2, y3])
model1.compile(optimizer='sgd', loss=custom_loss,
               loss_weights=[1., 0.])
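As a sanity check, here is a minimal NumPy sketch (not Keras itself, and the variable names are illustrative) of what `loss_weights=[1., 0.]` does: the total loss being minimized is the weighted sum of the per-output losses, so the second output contributes nothing to training.

```python
import numpy as np

def mse(y_true, y_pred):
    # mean squared error, averaged over all elements
    return np.mean(np.square(y_true - y_pred))

# fake targets and predictions for the two outputs (batch of 4, dim 2)
rng = np.random.default_rng(0)
y2_true, y2_pred = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
y3_true, y3_pred = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))

loss_weights = [1.0, 0.0]
per_output = [mse(y2_true, y2_pred), mse(y3_true, y3_pred)]

# total loss: weighted sum of per-output losses
total = sum(w * l for w, l in zip(loss_weights, per_output))
assert total == per_output[0]  # y3's loss is multiplied by 0
```

Note that the gradients of the zero-weighted output are also scaled by zero, so y3 has no effect on the shared weights during training.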

What is the shape/structure of the y_true and y_pred arguments in the loss function when multiple outputs are used? Can I access them as above? Is it y_pred[0] or y_pred[:,0]?

In Keras multi-output models the loss function is applied to each output separately. In pseudo-code:

loss = sum([loss_function(output_true, output_pred)
            for (output_true, output_pred) in zip(outputs_data, outputs_model)])
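A concrete NumPy sketch of that pseudo-code (the names are illustrative, not Keras internals). Each call to the loss receives only one output's tensors, of shape `(batch, output_dim)`; inside the loss, `y_pred[0]` is the first sample and `y_pred[:, 0]` is the first feature column, so neither selects "output y2".

```python
import numpy as np

def loss_function(y_true, y_pred):
    # called once per output; y_true/y_pred have shape (batch, output_dim)
    return np.mean(np.square(y_true - y_pred), axis=-1)  # shape (batch,)

# two outputs, batch of 3, output dim 2
outputs_data  = [np.ones((3, 2)), np.zeros((3, 2))]   # targets for y2, y3
outputs_model = [np.zeros((3, 2)), np.zeros((3, 2))]  # predictions

loss = sum(np.mean(loss_function(t, p))
           for t, p in zip(outputs_data, outputs_model))

# inside loss_function, indexing slices within ONE output:
assert outputs_model[0][0].shape == (2,)     # first sample
assert outputs_model[0][:, 0].shape == (3,)  # first feature column
assert loss == 1.0  # mse(ones, zeros) = 1, mse(zeros, zeros) = 0
```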

The functionality to apply a single loss function jointly over multiple outputs does not seem to be available. One could probably achieve that by incorporating the loss computation as a layer of the network.
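One way to sketch that idea with the modern `tf.keras` API (this differs from the older `Model(input=..., output=...)` syntax above, and the target-as-input pattern here is an assumption, not the asker's code): feed the targets in as extra inputs and attach a joint loss with `model.add_loss`, which is free to mix both outputs in one expression.

```python
import tensorflow as tf
from tensorflow import keras

# a small two-output model; shapes are illustrative
x = keras.Input(shape=(4,))
h = keras.layers.Dense(8, activation='relu')(x)
y2 = keras.layers.Dense(1, name='y2')(h)
y3 = keras.layers.Dense(1, name='y3')(h)

# targets enter the graph as inputs so the loss can see them
y2_target = keras.Input(shape=(1,))
y3_target = keras.Input(shape=(1,))

# a joint loss over the outputs (here it only uses y2, but it
# could combine y2 and y3 in a single expression)
joint_loss = tf.reduce_mean(tf.square(y2_target - y2))

model = keras.Model(inputs=[x, y2_target, y3_target], outputs=[y2, y3])
model.add_loss(joint_loss)
model.compile(optimizer='sgd')  # no per-output loss needed
```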

