keras merge concatenate failed because of different input shapes even though the input shapes are the same


Problem description

I am trying to concatenate 4 different layers into one layer to input into the next part of my model. I am using the Keras functional API and the code is shown below.

# Concat left side 4 inputs and right side 4 inputs
print(lc,l1_conv_net,l2_conv_net,l3_conv_net)
left_combined = merge.Concatenate()([lc, l1_conv_net, l2_conv_net, l3_conv_net])

This error occurs, saying that my input shapes are not the same. However, I also printed the input shapes, and they seem to be the same except along the concat axis (shape[1], since shape[0] = ? is the number of examples in the batch).

Tensor("input_1:0", shape=(?, 6), dtype=float32) Tensor("add_3/add_1:0", shape=(?, 100), dtype=float32) Tensor("add_6/add_1:0", shape=(?, 100), dtype=float32) Tensor("add_9/add_1:0", shape=(?, 100), dtype=float32)

ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 6), (None, 7, 62), (None, 23, 62), (None, 2, 62)]
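
A minimal sketch of the rule the error enforces, using toy Input layers with the shapes taken from the error message (illustrative only, not the question's actual model):

from keras.layers import Input, Concatenate

# Toy inputs with the shapes from the error message (batch dimension omitted).
a = Input(shape=(7, 62))
b = Input(shape=(23, 62))
c = Input(shape=(2, 62))
d = Input(shape=(6,))

# Works: all three tensors are rank 3 and match on every axis except axis=1.
ok = Concatenate(axis=1)([a, b, c])        # -> (None, 32, 62)

# Fails: d is rank 2, so no choice of axis can make the shapes match,
# which is exactly the ValueError shown above.
# bad = Concatenate(axis=1)([d, a, b, c])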

Coincidentally, the shapes (None, 7, 62), (None, 23, 62), and (None, 2, 62) are the input tensor shapes for another custom Keras layer, which produces l1_conv_net as shown below:

l1_conv_net = build_graph_conv_net_fp_only([l1x, l1y, l1z],
                                                   conv_layer_sizes=self.conv_width,
                                                   fp_layer_size=self.fp_length,
                                                   conv_activation='relu', fp_activation='softmax')

So the print statement says that the shapes are (?, 6), (?, 100), (?, 100), (?, 100), but the Keras merge function reads them as [(None, 6), (None, 7, 62), (None, 23, 62), (None, 2, 62)]. Why is this so?

Thanks!

Answer

So... if the message says you're using these shapes, then you can't concatenate them:

[(None, 6), (None, 7, 62), (None, 23, 62), (None, 2, 62)] 

You can try to concatenate the last three:

left_combined = keras.layers.Concatenate(axis=1)([l1_conv_net, l2_conv_net, l3_conv_net])
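
Assuming the shapes in the error message, this gives a (None, 32, 62) tensor (7 + 23 + 2 = 32 along axis 1); lc, with shape (None, 6), has a different rank, so it cannot take part in that concatenation as it stands.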

Don't print the tensors; print K.int_shape(tensor) to see the actual shapes. (By the way, something is really going wrong with what you posted, because those tensor shapes are too weird. The Keras shapes would make sense if you're using 1D convolutions or RNNs.)
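
A minimal sketch of that check, assuming the variable names from the question's code:

from keras import backend as K

# Print the static Keras shapes instead of the raw backend tensors.
for name, t in [('lc', lc), ('l1_conv_net', l1_conv_net),
                ('l2_conv_net', l2_conv_net), ('l3_conv_net', l3_conv_net)]:
    print(name, K.int_shape(t))   # e.g. ('lc', (None, 6))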

If your backend is not TensorFlow, you may have a wrong output_shape argument in a custom or Lambda layer somewhere.
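
A hypothetical illustration of that output_shape argument (the layer and shapes here are invented for the example, not taken from the question):

from keras import backend as K
from keras.layers import Input, Lambda

x = Input(shape=(7, 62))                        # (None, 7, 62)
# On non-TensorFlow backends Keras cannot always infer the shape a Lambda
# (or custom) layer produces, so output_shape must describe the real result;
# an incorrect value here propagates bad shapes into later layers such as Concatenate.
y = Lambda(lambda t: K.mean(t, axis=1),
           output_shape=lambda s: (s[0], s[2]))(x)
print(K.int_shape(y))                           # (None, 62)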
