How to apply convolution on the last three dimensions of a 5D tensor using the Conv2D in Keras?
Question
Usually the input tensor of Conv2D in Keras is a 4D tensor with the dimensions batch_size * n * n * channel_size. Now I have a 5D tensor with the dimensions batch_size * N * n * n * channel_size, and I want to apply a 2D convolutional layer to the last three dimensions for each i in N. For example, if the kernel size is 1, then I expect the output to have the dimensions batch_size * N * n * n * 1.
Does anyone know an easy way to implement this with Keras?
For example, for fully-connected layers Keras can do this automatically: if the input has the shape batch_size * N * n, then the Dense layer in Keras applies the same FC layer for each i in N. Hence, if we set Dense(m), we get an output of shape batch_size * N * m.
Answer
You can use the TimeDistributed layer wrapper to apply the same convolution layer to all the images in the 5D tensor. For example:
from keras.models import Sequential
from keras.layers import Conv2D, TimeDistributed

model = Sequential()
model.add(TimeDistributed(Conv2D(5, (3, 3), padding='same'), input_shape=(10, 100, 100, 3)))
model.summary()
The model summary:
Layer (type) Output Shape Param #
=================================================================
time_distributed_2 (TimeDist (None, 10, 100, 100, 5) 140
=================================================================
Total params: 140
Trainable params: 140
Non-trainable params: 0
_________________________________________________________________
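For the exact case in the question (kernel size 1 and a single output channel), the same wrapper gives the expected shape; a minimal sketch, reusing the answer's 10 * 100 * 100 * 3 input shape:

```python
from tensorflow.keras import Input
from tensorflow.keras.layers import Conv2D, TimeDistributed
from tensorflow.keras.models import Sequential

# The same 1x1 conv is applied to each of the N = 10 frames, so
# (batch, N, n, n, channel_size) -> (batch, N, n, n, 1).
model = Sequential([
    Input(shape=(10, 100, 100, 3)),
    TimeDistributed(Conv2D(1, (1, 1))),
])
print(model.output_shape)  # (None, 10, 100, 100, 1)
```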