Why do we have to specify output shape during deconvolution in tensorflow?


Question

The TF documentation has an output_shape parameter in tf.conv2d_transpose. Why is this needed? Don't the strides, filter size and padding parameters of the layer decide the output shape of that layer, similar to how it is decided during convolution?

Answer

This has been asked on the TF GitHub and received an answer:

output_shape is needed because the shape of the output can't necessarily be computed from the shape of the input, specifically if the output is smaller than the filter and we're using VALID padding so the input is an empty image. However, this degenerate case is unimportant most of the time, so it'd be reasonable to make the Python wrapper compute output_shape automatically if it isn't set.
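
As a concrete illustration of that ambiguity, here is a minimal sketch (assuming TensorFlow 2.x and the tf.nn API; the shapes and random values are made up for the example):

```python
import tensorflow as tf

# With stride 2, a 3x3 kernel and 'SAME' padding, a forward convolution maps
# BOTH a 5x5 and a 6x6 image down to 3x3 (output = ceil(input / stride)).
# The transpose therefore cannot recover the spatial size from the 3x3 input
# alone, which is why tf.nn.conv2d_transpose asks for output_shape explicitly.

x = tf.random.normal([1, 3, 3, 1])        # [batch, height, width, in_channels]
kernel = tf.random.normal([3, 3, 1, 1])   # [height, width, out_channels, in_channels]

y5 = tf.nn.conv2d_transpose(x, kernel, output_shape=[1, 5, 5, 1],
                            strides=[1, 2, 2, 1], padding='SAME')
y6 = tf.nn.conv2d_transpose(x, kernel, output_shape=[1, 6, 6, 1],
                            strides=[1, 2, 2, 1], padding='SAME')

print(y5.shape)  # (1, 5, 5, 1)
print(y6.shape)  # (1, 6, 6, 1) -- same input, filter and stride, different output
```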

It makes sense to read the whole thread.

If you assume the following notation, output = o, input = i, kernel = k, stride = s, padding = p, the shape of the output will be:
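
The formula itself appears to have been an image in the original answer and is missing here; assuming it is the standard transposed-convolution relation, it would read:

o = s * (i - 1) + k - 2 * p

For the 3×3 input in the sketch above with s = 2, k = 3 and symmetric padding p = 1 this gives o = 5; the 6×6 alternative corresponds to the asymmetric padding that 'SAME' mode is allowed to use, which is exactly why the formula alone cannot settle the output shape.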
