How to visualize learned filters on TensorFlow


Problem description

Similarly to the Caffe framework, where it is possible to watch the learned filters during CNN training and their resulting convolutions with the input images, I wonder whether it is possible to do the same with TensorFlow?

A Caffe example can be viewed at this link:

http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb

Grateful for your help!

Solution

To see just a few conv1 filters in TensorBoard, you can use this code (it works for the CIFAR-10 tutorial):

# this should be a part of the inference(images) function in cifar10.py file

# conv1
with tf.variable_scope('conv1') as scope:
  kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                       stddev=1e-4, wd=0.0)
  conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
  biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope.name)
  _activation_summary(conv1)

  with tf.variable_scope('visualization'):
    # scale weights to [0 1], type is still float
    x_min = tf.reduce_min(kernel)
    x_max = tf.reduce_max(kernel)
    kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)

    # to tf.image_summary format [batch_size, height, width, channels]
    kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])

    # this will display random 3 filters from the 64 in conv1
    tf.image_summary('conv1/filters', kernel_transposed, max_images=3)
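
Note that the snippet above uses the pre-1.0 summary API. In TensorFlow 1.x, tf.image_summary was renamed to tf.summary.image and its max_images argument to max_outputs, so the visualization part would look roughly like this (a minimal sketch, still assuming the kernel variable from the conv1 scope above):

  with tf.variable_scope('visualization'):
    # scale weights to [0, 1] so they render correctly as an image
    x_min = tf.reduce_min(kernel)
    x_max = tf.reduce_max(kernel)
    kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)

    # tf.summary.image expects [batch_size, height, width, channels]
    kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])

    # TF 1.x name for tf.image_summary; max_images became max_outputs
    tf.summary.image('conv1/filters', kernel_transposed, max_outputs=3)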

I also wrote a simple gist to display all 64 conv1 filters in a grid.
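
The gist itself is not reproduced here, but as an illustration (a minimal sketch with a hypothetical helper, not the author's gist), one way to show all 64 filters at once is to pad each filter and rearrange the kernel into an 8 x 8 mosaic before passing a single image to the summary:

# illustrative sketch: tile a [height, width, channels, n] kernel into one
# [1, grid_y*height, grid_x*width, channels] image, e.g. 64 filters as an 8x8 grid
def kernels_to_grid(kernel, grid_y=8, grid_x=8, pad=1):
  kh, kw, channels, n = kernel.get_shape().as_list()   # e.g. 5, 5, 3, 64

  # scale weights to [0, 1] so they render as an image
  k_min = tf.reduce_min(kernel)
  k_max = tf.reduce_max(kernel)
  k = (kernel - k_min) / (k_max - k_min)

  # add a border around each filter so the grid cells stay visible
  k = tf.pad(k, [[pad, pad], [pad, pad], [0, 0], [0, 0]])
  h, w = kh + 2 * pad, kw + 2 * pad

  # rearrange [h, w, channels, n] into a grid_y x grid_x mosaic
  k = tf.transpose(k, [3, 0, 1, 2])                    # [n, h, w, c]
  k = tf.reshape(k, [grid_y, grid_x, h, w, channels])  # [gy, gx, h, w, c]
  k = tf.transpose(k, [0, 2, 1, 3, 4])                 # [gy, h, gx, w, c]
  return tf.reshape(k, [1, grid_y * h, grid_x * w, channels])

grid = kernels_to_grid(kernel)   # kernel from the conv1 scope above
tf.image_summary('conv1/filters_grid', grid, max_images=1)

The reshape/transpose pair simply lays the filters out row by row; with grid_y * grid_x equal to the number of filters, the single summary image shows all of them at once.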

