Low accuracy with change to TensorFlow Cifar10 example


Problem description

I am trying to modify the network structure provided by the Cifar10 example in TensorFlow. Specifically, I added another convolution layer (conv12) after the first convolution layer (conv1). No matter how I set the filter (I tried 1x1, 3x3, and 5x5) and whether I use weight decay or not, adding the new layer drops the accuracy to below 10%. That is equivalent to random guessing in Cifar10, since there are 10 classes.

The code structure is as follows; I don't modify any other part of the cifar code except setting the input image size to 48x48 (instead of 24x24). I assume the input size should not matter.

Note that conv12 is a depthwise convolution layer, because I wanted to add just a linear layer after the conv1 layer to minimize the change to the original code. With that, I expected the accuracy to be similar to the original version, but it drops to around 10%. (I also tried a normal convolution layer, and it didn't work either.)

  with tf.variable_scope('conv1') as scope:
    kernel1 = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                          stddev=1e-4, wd=0.0)
    conv_1 = tf.nn.conv2d(images, kernel1, [1, 1, 1, 1], padding='SAME')
    biases1 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias1 = tf.nn.bias_add(conv_1, biases1)
    conv1 = tf.nn.relu(bias1, name=scope.name)
    _activation_summary(conv1)


  with tf.variable_scope('conv12') as scope:
    kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 1],
                                           stddev=1e-4, wd=0.0)
    #conv_12 = tf.nn.conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME')
    conv_12 = tf.nn.depthwise_conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME')
    biases12 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias12 = tf.nn.bias_add(conv_12, biases12)        
    conv12 = tf.nn.relu(bias12)
    _activation_summary(conv12)

  pool1 = tf.nn.max_pool(conv12, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                     padding='SAME', name='pool1')
.....

Could someone please tell me what is wrong with the code?

Answer

Your second convolution:

kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 1]

is taking the depth-64 output of the previous layer and squeezing it down to a depth-1 output. That doesn't seem like it will match whichever code you have following this (if it's conv2 from the TensorFlow cifar example, tensorflow/models/image/cifar10/cifar10.py, then it definitely isn't going to work well, because that layer expects a depth-64 input).
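
To see the mismatch concretely, here is a minimal standalone shape check (the variable names are illustrative, not from the cifar code):

  import tensorflow as tf

  # Stand-in for the conv1 output: one 48x48 feature map with depth 64.
  x = tf.zeros([1, 48, 48, 64])
  # The questioned kernel, read by conv2d as [height, width, in_channels, out_channels].
  k = tf.zeros([1, 1, 64, 1])
  y = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')
  print(y.get_shape())  # (1, 48, 48, 1): the depth-64 input is squeezed to depth 1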

也许你真的想要 shape = [1,1,64,64] 只是在您的模型中添加一个额外的inception-style1x1卷积层?

Perhaps you really wanted shape=[1, 1, 64, 64], which would simply add an extra "inception-style" 1x1 convolutional layer into your model?
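
For reference, here is a sketch of what conv12 could look like with that shape, reusing the question's _variable_with_weight_decay and _variable_on_cpu helpers (untested, and kept as close to the original block as possible):

  with tf.variable_scope('conv12') as scope:
    kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 64],
                                           stddev=1e-4, wd=0.0)
    # Plain 1x1 convolution mapping depth 64 -> depth 64, so the
    # downstream layers still receive the input depth they expect.
    conv_12 = tf.nn.conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME')
    biases12 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias12 = tf.nn.bias_add(conv_12, biases12)
    conv12 = tf.nn.relu(bias12, name=scope.name)
    _activation_summary(conv12)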
