Keras TimeDistributed layer without LSTM

Question

I am kind of new to Keras. Here is what I am trying to achieve. I have a Keras model which takes an image as input and produces a 512-dimensional vector. I create it as:

import keras

input_img = keras.layers.Input(shape=(240, 320, 3))
cnn = make_vgg(input_img)                    # user-defined VGG-style feature extractor
out = NetVLADLayer(num_clusters=16)(cnn)     # user-defined NetVLAD pooling layer
model = keras.models.Model(inputs=input_img, outputs=out)
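
Note: make_vgg and NetVLADLayer are the asker's own code and are not shown. Purely as a hypothetical stand-in, any per-image model that ends in a 512-dimensional output reproduces the shapes discussed below:

import keras

# Hypothetical stand-in for the asker's make_vgg + NetVLADLayer pipeline,
# used only so the example can be run end to end.
input_img = keras.layers.Input(shape=(240, 320, 3))
x = keras.layers.Conv2D(32, 3, activation='relu')(input_img)
x = keras.layers.GlobalAveragePooling2D()(x)
out = keras.layers.Dense(512)(x)
model = keras.models.Model(inputs=input_img, outputs=out)
print(model.output_shape)  # (None, 512)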

Now, for training, each of my samples is actually 13 images. Say I have 2500 samples; then my data's dimensions are 2500x13x240x320x3. I want the model to be applied independently to each of the 13 images. I came across the TimeDistributed layer in Keras and am wondering how I can use it to achieve my objective.

t_input = Input(shape=(13, 240, 320, 3))
# How to use TimeDistributed with model?
t_out = TimeDistributed(out)
t_model = Model(inputs=t_input, outputs=t_out)

I am expecting t_out to have shape (None, 13, 512). The above code, however, throws a ValueError. Can anyone help my understanding?

Answer

The error occurs in this line:

t_out = TimeDistributed(out)

It happens because out is a tensor, but TimeDistributed expects a layer as its argument. The wrapped layer is then applied to every temporal slice (the dimension at index one) of the input. Since a Keras Model is itself a layer, you could instead do the following:

t_input = Input(shape=(13, 240, 320, 3))
t_out = TimeDistributed(model)(t_input)
t_model = Model(inputs=t_input, outputs=t_out)
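
A quick sanity check (with the asker's model, or the stand-in sketched above):

print(t_model.output_shape)  # (None, 13, 512)

TimeDistributed applies the same inner model, with shared weights, to each of the 13 temporal slices, which is exactly the "applied independently to the 13 images" behaviour the question asks for.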
