Output tensors to a Model must be Keras tensors

Problem Description

I was trying to build a model that learns from the difference between two models' outputs, so I wrote the code below. But it raised this error:

TypeError: Output tensors to a Model must be Keras tensors. Found: Tensor("sub:0", shape=(?, 10), dtype=float32)

I found related answers involving Lambda, but I couldn't solve the issue with them. Does anyone know about this problem? It seems to require converting the tensor into a Keras tensor.

Thanks in advance.

from keras.layers import Dense
from keras.models import Model
from keras.models import Sequential

left_branch = Sequential()
left_branch.add(Dense(10, input_dim=784))

right_branch = Sequential()
right_branch.add(Dense(10, input_dim=784))

diff = left_branch.output - right_branch.output  # plain backend subtraction: the result is not a Keras tensor

model = Model(inputs=[left_branch.input, right_branch.input], outputs=[diff])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])

model.summary(line_length=150)

Answer

It's better to keep all operations inside layers; don't subtract the outputs like that (I wouldn't risk hidden errors by doing things differently from what the documentation expects):

from keras.layers import *
from keras.models import Model, Sequential

def negativeActivation(x):
    return -x  # negate the right branch so Add() computes a difference

left_branch = Sequential()
left_branch.add(Dense(10, input_dim=784))

right_branch = Sequential()
right_branch.add(Dense(10, input_dim=784))

negativeRight = Activation(negativeActivation)(right_branch.output)
diff = Add()([left_branch.output, negativeRight])

model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])
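
The Lambda layer mentioned in the question works the same way: wrapping the subtraction in a Lambda makes the result come out of a Keras layer, and therefore a Keras tensor. A minimal sketch (not part of the original answer, reusing the two Sequential branches above):

from keras.layers import Lambda

# The Lambda layer performs the subtraction, so its output is a proper Keras tensor
diff = Lambda(lambda tensors: tensors[0] - tensors[1])(
    [left_branch.output, right_branch.output])

model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')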

When joining models like this, I prefer the functional Model way of doing it, with layers, rather than Sequential:

from keras.layers import Input, Dense, Activation, Add
from keras.models import Model

def negativeActivation(x):
    return -x  # negate the right branch so Add() computes a difference

leftInput = Input((784,))
rightInput = Input((784,))

left_branch = Dense(10)(leftInput)    # Dense(10) creates a layer
right_branch = Dense(10)(rightInput)  # passing the input creates the output

negativeRight = Activation(negativeActivation)(right_branch)
diff = Add()([left_branch, negativeRight])

model = Model(inputs=[leftInput, rightInput], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])

With this, you can create other models with the same layers, and they will share the same weights:

leftModel = Model(leftInput, left_branch)
rightModel = Model(rightInput, right_branch)
fullModel = Model([leftInput, rightInput], diff)
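
A quick sanity check of the weight sharing (a hypothetical snippet, assuming numpy is available): the difference computed by hand from the two sub-models matches fullModel, because all three share the same Dense layers.

import numpy as np

x = np.random.random((2, 784))
manual_diff = leftModel.predict(x) - rightModel.predict(x)  # uses the shared weights
full_diff = fullModel.predict([x, x])                       # same weights, same result
print(np.allclose(manual_diff, full_diff))                  # prints True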

Since they share the same layers, training one of these models will affect the others. For instance, you can train just the right branch of the full model by setting the left Dense layer's trainable attribute to False before compiling (or recompiling before further training).
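
A minimal sketch of that freezing step (assuming the left Dense layer was kept in a variable, e.g. left_dense = Dense(10) with left_branch = left_dense(leftInput); the data names are hypothetical):

left_dense.trainable = False  # freeze the left branch's weights

# Recompile so the change takes effect; fitting then updates only the right Dense layer
fullModel.compile(optimizer='rmsprop', loss='binary_crossentropy')
fullModel.fit([left_data, right_data], targets)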
