Tensorflow Convolution Neural Network with different sized images


Problem description


I am attempting to create a deep CNN that can classify each individual pixel in an image. I am replicating the architecture from the image below, taken from this paper. The paper mentions that deconvolutions are used so that any size of input is possible, as can be seen in the image below.

Github Repository

Currently, I have hard-coded my model to accept images of size 32x32x7, but I would like to accept input of any size. What changes would I need to make to my code to accept variable sized input?

 x = tf.placeholder(tf.float32, shape=[None, 32*32*7])
 y_ = tf.placeholder(tf.float32, shape=[None, 32*32*7, 3])
 ...
 DeConnv1 = tf.nn.conv3d_transpose(layer1, filter = w, output_shape = [1,32,32,7,1], strides = [1,2,2,2,1], padding = 'SAME')
 ...
 final = tf.reshape(final, [1, 32*32*7])
 W_final = weight_variable([32*32*7,32*32*7,3])
 b_final = bias_variable([32*32*7,3])
 final_conv = tf.tensordot(final, W_final, axes=[[1], [1]]) + b_final

Solution

Dynamic placeholders

Tensorflow allows multiple dynamic (a.k.a. None) dimensions in placeholders. The engine can't ensure correctness while the graph is being built, so the client is responsible for feeding correct input, but in exchange this provides a lot of flexibility.

So I'm going from...

x = tf.placeholder(tf.float32, shape=[None, N*M*P])
y_ = tf.placeholder(tf.float32, shape=[None, N*M*P, 3])
...
x_image = tf.reshape(x, [-1, N, M, P, 1])

to...

# Nearly all dimensions are dynamic
x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])

Since you intend to reshape the input to 5D anyway, why not use 5D in x_image right from the start? At this point, the second dimension of label is arbitrary, but we promise tensorflow that it will match x_image.
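
To make that promise concrete, here is a minimal sketch (mine, not from the answer's code) of a consistent feed; batch_x and batch_y are hypothetical names:

import numpy as np

B, N, M, P = 2, 16, 16, 3
batch_x = np.zeros([B, N, M, P, 1], dtype=np.float32)
batch_y = np.zeros([B, N * M * P, 3], dtype=np.float32)  # second dim matches N*M*P
# feed_dict={x_image: batch_x, label: batch_y} is now consistent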

Dynamic shapes in deconvolution

Next, the nice thing about tf.nn.conv3d_transpose is that its output shape can be dynamic. So instead of this:

# Hard-coded output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=[1,32,32,7,1], ...)

... you can do this:

# Dynamic output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=tf.shape(x_image), ...)

This way the transpose convolution can be applied to any image and the result will take the shape of x_image that was actually passed in at runtime.

Note that the static shape of x_image is (?, ?, ?, ?, 1).
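
To see the difference between the static and the dynamic shape, here is a quick sketch (assuming the x_image placeholder above):

print(x_image.shape)               # static shape, fixed at graph-build time: (?, ?, ?, ?, 1)
dynamic_shape = tf.shape(x_image)  # dynamic shape: a 1-D int32 tensor evaluated at run time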

All-Convolutional network

The final and most important piece of the puzzle is to make the whole network convolutional, and that includes your final dense layer too. A dense layer must define its dimensions statically, which forces the whole neural network to fix the input image dimensions.

Luckily for us, Springenberg et al. describe a way to replace an FC layer with a CONV layer in the paper "Striving for Simplicity: The All Convolutional Net". I'm going to use a convolution with three 1x1x1 filters (see also this question):

final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])

If we ensure that final has the same dimensions as DeConnv1 (and the others), it'll make y exactly the shape we want: [-1, N * M * P, 3].
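
To see why the 1x1x1 convolution can stand in for a dense layer, here is a sketch of mine (not from the original answer), assuming final has shape [B, N, M, P, 1]:

# Each 1x1x1 filter computes one output channel as a linear combination of the
# input channels at a single voxel, with weights shared across all positions --
# i.e. a dense layer applied independently to every voxel.
W_1x1 = tf.Variable(tf.truncated_normal([1, 1, 1, 1, 3], stddev=0.1))
per_voxel_logits = tf.nn.conv3d(final, W_1x1, strides=[1, 1, 1, 1, 1], padding='SAME')
# shape [B, N, M, P, 3]; reshaping to [-1, 3] yields one 3-class logit vector per voxel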

Combining it all together

Your network is pretty large, but all the deconvolutions basically follow the same pattern, so I've simplified my proof-of-concept code to just one deconvolution. The goal is just to show that this kind of network can handle images of arbitrary size. Final remark: image dimensions can vary between batches, but within one batch they have to be the same.
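
If you do need mixed sizes within a single batch, one common workaround (mine, not part of the answer) is to zero-pad every image to the largest size in the batch; pad_to below is a hypothetical helper:

import numpy as np

def pad_to(img, N, M, P):
    # img: array of shape [n, m, p, 1] with n <= N, m <= M, p <= P
    n, m, p = img.shape[:3]
    return np.pad(img, [(0, N - n), (0, M - m), (0, P - p), (0, 0)], mode='constant')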

The full code:

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

def conv3d_dilation(tempX, tempFilter):
  return tf.layers.conv3d(tempX, filters=tempFilter, kernel_size=[3, 3, 1], strides=1, padding='SAME', dilation_rate=2)

def conv3d(tempX, tempW):
  return tf.nn.conv3d(tempX, tempW, strides=[1, 2, 2, 2, 1], padding='SAME')

def conv3d_s1(tempX, tempW):
  return tf.nn.conv3d(tempX, tempW, strides=[1, 1, 1, 1, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

def max_pool_3x3(x):
  return tf.nn.max_pool3d(x, ksize=[1, 3, 3, 3, 1], strides=[1, 2, 2, 2, 1], padding='SAME')

x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])

W_conv1 = weight_variable([3, 3, 1, 1, 32])
h_conv1 = conv3d(x_image, W_conv1)
# second convolution
W_conv2 = weight_variable([3, 3, 4, 32, 64])
h_conv2 = conv3d_s1(h_conv1, W_conv2)
# third convolution path 1
W_conv3_A = weight_variable([1, 1, 1, 64, 64])
h_conv3_A = conv3d_s1(h_conv2, W_conv3_A)
# third convolution path 2
W_conv3_B = weight_variable([1, 1, 1, 64, 64])
h_conv3_B = conv3d_s1(h_conv2, W_conv3_B)
# fourth convolution path 1
W_conv4_A = weight_variable([3, 3, 1, 64, 96])
h_conv4_A = conv3d_s1(h_conv3_A, W_conv4_A)
# fourth convolution path 2
W_conv4_B = weight_variable([1, 7, 1, 64, 64])
h_conv4_B = conv3d_s1(h_conv3_B, W_conv4_B)
# fifth convolution path 2
W_conv5_B = weight_variable([1, 7, 1, 64, 64])
h_conv5_B = conv3d_s1(h_conv4_B, W_conv5_B)
# sixth convolution path 2
W_conv6_B = weight_variable([3, 3, 1, 64, 96])
h_conv6_B = conv3d_s1(h_conv5_B, W_conv6_B)
# concatenation
layer1 = tf.concat([h_conv4_A, h_conv6_B], 4)
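# h_conv4_A and h_conv6_B carry 96 channels each, so layer1 has 96 + 96 = 192,
# matching the last dimension of the transpose-convolution filter w below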
w = tf.Variable(tf.constant(1., shape=[2, 2, 4, 1, 192]))
DeConnv1 = tf.nn.conv3d_transpose(layer1, filter=w, output_shape=tf.shape(x_image), strides=[1, 2, 2, 2, 1], padding='SAME')

final = DeConnv1
final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=label, logits=y))

print('x_image:', x_image)
print('DeConnv1:', DeConnv1)
print('final_conv:', final_conv)

def try_image(N, M, P, B=1):
  batch_x = np.random.normal(size=[B, N, M, P, 1])
  batch_y = np.ones([B, N * M * P, 3]) / 3.0

  deconv_val, final_conv_val, loss = sess.run([DeConnv1, final_conv, cross_entropy],
                                              feed_dict={x_image: batch_x, label: batch_y})
  print(deconv_val.shape)
  print(final_conv_val.shape)  # the fetched value's runtime shape, not the static (?, ...) shape
  print(loss)
  print()

tf.global_variables_initializer().run()
try_image(32, 32, 7)
try_image(16, 16, 3)
try_image(16, 16, 3, 2)
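
Because output_shape=tf.shape(x_image), the deconvolution always comes out with exactly the shape of the batch that was fed in, so the three calls above should report deconvolution shapes (1, 32, 32, 7, 1), (1, 16, 16, 3, 1) and (2, 16, 16, 3, 1).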
