Fully Convolutional Network Training Image Size


Problem Description

I'm trying to replicate the results of Fully Convolutional Networks (FCN) for Semantic Segmentation using TensorFlow.

I'm stuck on feeding training images into the computation graph. The fully convolutional network was trained on the PASCAL VOC dataset; however, the training images in that dataset come in varied sizes.

I just want to ask whether they preprocessed the training images to make them the same size, and if so, how. If not, did they simply feed batches of differently sized images into the FCN? Is it possible to feed images of different sizes in one batch into a computation graph in TensorFlow? Is it possible to do that using queue input rather than a placeholder?

Recommended Answer

It's not possible to feed images of different sizes into a single input batch. Every batch can contain an undefined number of samples (that's the batch size, denoted None below), but every sample must have the same dimensions.

When you train a fully convolutional network, you have to train it like a network with fully connected layers at the end. So every input image in a batch must have the same width, height and depth. Resize them, as sketched below.
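A minimal sketch of that resizing step, assuming a TensorFlow 1.x-style graph; the 224x224 target size and the nearest-neighbour handling of the label maps are my assumptions, not part of the original answer:

```python
import tensorflow as tf

# Hypothetical fixed training size; any size works as long as every
# sample in a batch shares it.
TARGET_H, TARGET_W = 224, 224

def resize_pair(image, label):
    """Resize a variable-sized image/label pair to the fixed training size."""
    # Bilinear resize for the RGB image.
    image = tf.image.resize_images(image, [TARGET_H, TARGET_W])
    # Nearest-neighbour resize for the label map so class ids stay valid.
    label = tf.image.resize_images(
        label, [TARGET_H, TARGET_W],
        method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    return image, label

# A single variable-sized image and its per-pixel label map.
image_ph = tf.placeholder(tf.float32, shape=[None, None, 3])
label_ph = tf.placeholder(tf.int32, shape=[None, None, 1])
image_fixed, label_fixed = resize_pair(image_ph, label_ph)
# After resizing, samples can be stacked into a batch of shape
# [None, TARGET_H, TARGET_W, 3] and fed to the network.
```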

The only difference is that, while fully connected layers output a single vector for every sample in the input batch (shape [None, num_classes]), the fully convolutional layers output a probability map over classes.

During training, when the input image dimensions equal the network's input dimensions, the output will be a probability map with shape [None, 1, 1, num_classes].

You can remove the dimensions of size 1 from the output tensor using tf.squeeze and then compute the loss and accuracy just as you would with a fully connected network.

At test time, when you feed the network images with dimensions greater than its input size, the output will be a probability map with shape [None, n, n, num_classes].
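As an illustration of what you might do with that map at test time (a sketch under the same assumptions; taking a per-location argmax is my addition, not part of the original answer):

```python
import tensorflow as tf

NUM_CLASSES = 21  # hypothetical, as above

# Assumed name: `logits_map` is the fully convolutional output on a test
# image larger than the training input, shape [None, n, n, NUM_CLASSES].
logits_map = tf.placeholder(tf.float32, shape=[None, None, None, NUM_CLASSES])

probs_map = tf.nn.softmax(logits_map)       # per-location class probabilities
class_map = tf.argmax(logits_map, axis=3)   # [None, n, n] predicted class ids
```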

