How to stack multiple layers of conv2d_transpose() of Tensorflow


Question

I'm trying to stack 2 layers of tf.nn.conv2d_transpose() to up-sample a tensor. It works fine during the forward pass, but I get an error during backpropagation: ValueError: Incompatible shapes for broadcasting: (8, 256, 256, 24) and (8, 100, 100, 24).

Basically, I've just set the output of the first conv2d_transpose as the input of the second one:

convt_1 = tf.nn.conv2d_transpose(...)
convt_2 = tf.nn.conv2d_transpose(convt_1, ...)

Using just one conv2d_transpose, everything works fine. The error only occurs when multiple conv2d_transpose layers are stacked together.

I'm not sure of the proper way of implementing multiple layers of conv2d_transpose. Any advice on how to go about this would be very much appreciated.

Here's a small code sample that replicates the error:

import numpy as np
import tensorflow as tf

IMAGE_HEIGHT = 256
IMAGE_WIDTH = 256
CHANNELS = 1

batch_size = 8
num_labels = 2

in_data = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
labels = tf.placeholder(tf.int32, shape=(batch_size, IMAGE_HEIGHT, IMAGE_WIDTH, 1))

# Variables
w0 = tf.Variable(tf.truncated_normal([3, 3, CHANNELS, 32]))
b0 = tf.Variable(tf.zeros([32]))

# Down sample: stride 2 with 'SAME' padding halves H and W
conv_0 = tf.nn.relu(tf.nn.conv2d(in_data, w0, [1, 2, 2, 1], padding='SAME') + b0)
print("Convolution 0:", conv_0)  # shape: (8, 128, 128, 32)


# Up sample 1. Upscale to 100 x 100 x 24 (note: stride [1, 1, 1, 1] cannot change the 128 x 128 spatial size)
wt1 = tf.Variable(tf.truncated_normal([3, 3, 24, 32]))
convt_1 = tf.nn.sigmoid(
          tf.nn.conv2d_transpose(conv_0, 
                                 filter=wt1, 
                                 output_shape=[batch_size, 100, 100, 24], 
                                 strides=[1, 1, 1, 1]))
print("Deconvolution 1:", convt_1)


# Up sample 2. Upscale to 256 x 256 x 2 (also inconsistent with stride [1, 1, 1, 1])
wt2 = tf.Variable(tf.truncated_normal([3, 3, 2, 24]))
convt_2 = tf.nn.sigmoid(
          tf.nn.conv2d_transpose(convt_1, 
                                 filter=wt2, 
                                 output_shape=[batch_size, IMAGE_HEIGHT, IMAGE_WIDTH, 2], 
                                 strides=[1, 1, 1, 1]))
print("Deconvolution 2:", convt_2)

# Loss computation
logits = tf.reshape(convt_2, [-1, num_labels])
reshaped_labels = tf.reshape(labels, [-1])
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=reshaped_labels, logits=logits)
loss = tf.reduce_mean(cross_entropy)

optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

Answer

I guess you need to change the 'strides' parameter in your conv2d_transpose calls. conv2d_transpose is like conv2d, but with input and output reversed.

For conv2d, the stride and input shape determine the output shape. For conv2d_transpose, the stride and output shape determine the input shape. Your stride is [1, 1, 1, 1], which means the output and input of conv2d_transpose have the same spatial size (ignoring boundary effects). Here conv_0 is 128 x 128 (256 halved by the stride-2 convolution), so a stride-1 transpose with a declared output_shape of 100 x 100 expects a 100 x 100 input, and the shapes conflict.
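
As a quick sanity check, here is a hypothetical helper (expected_input_size is my own illustration, not a TensorFlow API) applying that rule to the shapes in the question:

# Hypothetical helper: with 'SAME' padding, conv2d_transpose expects
# input_size = ceil(output_size / stride) -- conv2d's output rule in reverse.
def expected_input_size(output_size, stride):
    return -(-output_size // stride)  # ceiling division

print(expected_input_size(100, 1))  # 100, but conv_0 produces 128 x 128: mismatch
print(expected_input_size(256, 2))  # 128, which matches conv_0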

For input H = W = 100 and stride = [1, 2, 2, 1] with padding set to 'SAME', the output of conv2d_transpose should be 200 (the reverse of conv2d). In short, the input shape, output shape, and stride need to be compatible.
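
To make the stack work end to end, one possibility (a minimal sketch under my own assumptions about the layer sizes, not necessarily the asker's intended architecture) is to down-sample twice and then use stride [1, 2, 2, 1] in both transpose layers, so each declared output_shape is exactly double its input:

import tensorflow as tf

batch_size = 8
in_data = tf.placeholder(tf.float32, shape=(batch_size, 256, 256, 1))

# Down sample twice: 256 -> 128 -> 64
w0 = tf.Variable(tf.truncated_normal([3, 3, 1, 32]))
conv_0 = tf.nn.relu(tf.nn.conv2d(in_data, w0, [1, 2, 2, 1], padding='SAME'))  # (8, 128, 128, 32)
w1 = tf.Variable(tf.truncated_normal([3, 3, 32, 32]))
conv_1 = tf.nn.relu(tf.nn.conv2d(conv_0, w1, [1, 2, 2, 1], padding='SAME'))   # (8, 64, 64, 32)

# Up sample 1: stride 2 with 'SAME' padding doubles 64 -> 128, so the
# declared output_shape matches what the op infers from its input.
wt1 = tf.Variable(tf.truncated_normal([3, 3, 24, 32]))  # [h, w, out_channels, in_channels]
convt_1 = tf.nn.sigmoid(
          tf.nn.conv2d_transpose(conv_1,
                                 filter=wt1,
                                 output_shape=[batch_size, 128, 128, 24],
                                 strides=[1, 2, 2, 1],
                                 padding='SAME'))

# Up sample 2: doubles again, 128 -> 256; its input shape matches convt_1's
# output, so gradients flow back without the broadcasting error.
wt2 = tf.Variable(tf.truncated_normal([3, 3, 2, 24]))
convt_2 = tf.nn.sigmoid(
          tf.nn.conv2d_transpose(convt_1,
                                 filter=wt2,
                                 output_shape=[batch_size, 256, 256, 2],
                                 strides=[1, 2, 2, 1],
                                 padding='SAME'))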

