Interconnection between two Convolutional Layers


Problem Description


I have a question regarding the interconnection between two convolutional layers in a CNN. For example, suppose I have an architecture like this:

input: 28 x 28

conv1: 3 x 3 filter, no. of filters : 16

conv2: 3 x 3 filter, no. of filters : 32

After conv1 we get an output of 16 x 28 x 28, assuming the image dimensions are not reduced. So we have 16 feature maps. In the next layer, each feature map is connected to the next layer, meaning that if we consider each feature map (28 x 28) as a neuron, then each neuron will be connected to all 32 filters, for a total of (3 x 3 x 16) x 32 parameters. How are these two layers stacked or interconnected? In the case of an Artificial Neural Network we have weights between two layers. Is there something like this in a CNN as well? How is the output of one convolutional layer fed to the next convolutional layer?

Solution

The number of parameters of a convolutional layer with n filters of size k×k which comes after f feature maps is

n ⋅ (f ⋅ k ⋅ k + 1)

where the +1 comes from the bias.
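As a quick sanity check, the formula can be written as a small helper (a hypothetical function name; n, f, and k are the filter count, input feature-map count, and filter size from the formula above):

```python
def conv_params(n, f, k):
    """Parameters of a conv layer: n filters of size k x k applied to
    f input feature maps, plus one bias per filter."""
    return n * (f * k * k + 1)

# e.g. 8 filters of size 5x5 on a 3-channel (RGB) input:
print(conv_params(n=8, f=3, k=5))  # 8 * (3*25 + 1) = 608
```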

Hence each of the f filters is not of shape k×k×1 but of shape k×k×f.

How is the output of one convolutional layer fed to the next convolutional layer?

Just like the input is fed to the first convolutional layer. There is no difference (except the number of feature maps).

Convolution on one input feature map

(Animation: https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/same_padding_no_strides.gif)

Image source: https://github.com/vdumoulin/conv_arithmetic

See also: another animation

Multiple input feature maps

It works the same:

  • The filter has the same depth as the input. Before it was 1, now it is more.
  • You still slide the filter over all (x, y) positions. For each position, it gives one output.
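The two points above can be sketched in pure Python (an illustrative, unoptimized sketch with "valid" padding; the function name and toy data are made up for this example). Note that the filter has shape (f, k, k), matching the input depth, and each (x, y) position produces a single scalar:

```python
# Minimal multi-channel "valid" convolution. The input has shape
# (f, H, W) and the single filter has shape (f, k, k): the filter's
# depth always equals the number of input feature maps.
def conv2d_valid(inputs, filt, bias=0.0):
    f, H, W = len(inputs), len(inputs[0]), len(inputs[0][0])
    k = len(filt[0])
    out = []
    for y in range(H - k + 1):          # slide over all valid (x, y) positions
        row = []
        for x in range(W - k + 1):
            s = bias                    # one scalar output per position
            for c in range(f):          # sum over the full input depth
                for dy in range(k):
                    for dx in range(k):
                        s += inputs[c][y + dy][x + dx] * filt[c][dy][dx]
            row.append(s)
        out.append(row)
    return out

# Two 3x3 input feature maps, one 2x2 filter of depth 2.
inputs = [
    [[1, 0, 2], [0, 1, 0], [3, 0, 1]],
    [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
]
filt = [
    [[1, 0], [0, 1]],   # weights applied to feature map 0
    [[0, 1], [1, 0]],   # weights applied to feature map 1
]
out = conv2d_valid(inputs, filt)
print(out)  # → [[4.0, 0.0], [0.0, 4.0]]
```

A convolutional layer with n filters simply runs n such filters over the same input, stacking the n resulting maps as the output depth.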

Your example

  • First conv layer: 160 = 16*(3*3+1)
  • Second conv layer: 4640 = 32*(16*3*3+1)
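Plugging the question's architecture into the formula reproduces both counts (a minimal check; note the grayscale 28 x 28 input contributes f = 1 to the first layer):

```python
def conv_params(n, f, k):
    # n filters of size k x k over f input feature maps, +1 bias each
    return n * (f * k * k + 1)

first = conv_params(n=16, f=1, k=3)    # the 28x28 input is 1 feature map
second = conv_params(n=32, f=16, k=3)  # conv1 outputs 16 feature maps
print(first, second)  # 160 4640
```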

