Keras TimeDistributed layer with multiple inputs
Question
I'm trying to make the following line of code work:
low_encoder_out = TimeDistributed( AutoregressiveDecoder(...) )([X_tf, embeddings])
where AutoregressiveDecoder is a custom layer that takes two inputs.
After a bit of googling, the problem seems to be that the TimeDistributed wrapper doesn't accept multiple inputs. Some solutions propose merging the two inputs before feeding them to the layer, but since their shapes are
X_tf.shape: (?, 16, 16, 128, 5)
embeddings.shape: (?, 16, 1024)
I really don't know how to merge them. Is there a way to make the TimeDistributed layer work with more than one input? Or, alternatively, is there a nice way to merge the two inputs?
Answer
As you mentioned, the TimeDistributed layer does not support multiple inputs. One (not-very-nice) workaround, considering the fact that the number of timesteps (i.e. the second axis) must be the same for all the inputs, is to reshape all of them to (None, n_timesteps, n_featsN), concatenate them, and then feed the result as the input of the TimeDistributed layer:
from keras.layers import Reshape, TimeDistributed, concatenate

X_tf_r = Reshape((n_timesteps, -1))(X_tf)              # (None, 16, 16*128*5)
embeddings_r = Reshape((n_timesteps, -1))(embeddings)  # (None, 16, 1024), unchanged
concat = concatenate([X_tf_r, embeddings_r])           # (None, 16, 10240 + 1024)
low_encoder_out = TimeDistributed(AutoregressiveDecoder(...))(concat)
Of course, you might need to modify the definition of your custom layer and separate the inputs back if necessary.
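To make the whole pattern concrete, here is a minimal runnable sketch using the shapes from the question. `SumDecoder` is a hypothetical toy stand-in for the real `AutoregressiveDecoder`: it shows how the wrapped layer can split the concatenated features back into the two original inputs before doing its work.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

n_timesteps = 16

class SumDecoder(layers.Layer):
    """Toy stand-in for the custom decoder: splits the concatenated
    per-timestep features back into the two original inputs."""
    def __init__(self, split_at, **kwargs):
        super().__init__(**kwargs)
        self.split_at = split_at  # number of features that came from X_tf

    def call(self, inputs):
        # TimeDistributed calls this once per timestep: inputs is (batch, n_feats)
        a = inputs[:, : self.split_at]  # flattened X_tf features
        b = inputs[:, self.split_at :]  # embedding features
        # Combine them however the real decoder would; here a trivial sum of means.
        return tf.reduce_mean(a, axis=-1, keepdims=True) + \
               tf.reduce_mean(b, axis=-1, keepdims=True)

X_tf = layers.Input(shape=(n_timesteps, 16, 128, 5))  # (None, 16, 16, 128, 5)
embeddings = layers.Input(shape=(n_timesteps, 1024))  # (None, 16, 1024)

X_tf_r = layers.Reshape((n_timesteps, -1))(X_tf)            # (None, 16, 10240)
embeddings_r = layers.Reshape((n_timesteps, -1))(embeddings)  # (None, 16, 1024)
concat = layers.Concatenate()([X_tf_r, embeddings_r])         # (None, 16, 11264)

out = layers.TimeDistributed(SumDecoder(split_at=16 * 128 * 5))(concat)
model = Model([X_tf, embeddings], out)
print(model.output_shape)  # (None, 16, 1)
```

Storing `split_at` in the layer is what lets the decoder undo the concatenation, so the only real change to the original custom layer is accepting one tensor and slicing it instead of receiving two tensors.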