Difference between Tensorflow convolution and numpy convolution


Problem description

import numpy as np
import tensorflow as tf

# Input placeholder: batch of 1, width 10, 1 channel.
X_node    = tf.placeholder('float', [1, 10, 1])
# Filter: width 3, 1 input channel, 1 output channel.
filter_tf = tf.Variable(tf.truncated_normal([3, 1, 1], stddev=0.1))

Xconv_tf_tensor = tf.nn.conv1d(X_node, filter_tf, 1, 'SAME')

X = np.random.normal(0, 1, [1, 10, 1])
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    feed_dict = {X_node: X}
    filter_np = filter_tf.eval()
    # Run the same data and filter through TF's conv1d and numpy's convolve.
    Xconv_tf = sess.run(Xconv_tf_tensor, feed_dict)
    Xconv_np = np.convolve(X[0, :, 0], filter_np[:, 0, 0], 'same')

I am trying to inspect the results of a Tensorflow convolution to check that it behaves as I intended. When I run the numpy convolution and compare it to the Tensorflow convolution, the answers are different. The above code is how I ran the test. I was hoping Xconv_tf and Xconv_np would be equal.

My final goal is to run a 2D convolution on a matrix with a 1-dimensional filter, i.e. to run a 1d convolution on each row with the same filter. To make this work (which will basically be a loop of 1d convolutions over the rows, as sketched below), I need to figure out why np.convolve and tf.nn.conv1d give me different answers.
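
A minimal numpy sketch of that row-wise loop (the matrix shape and filter values here are only illustrative):

import numpy as np

X2d = np.random.normal(0, 1, [5, 10])   # a matrix with 5 rows
w   = np.array([0.25, 0.5, 0.25])       # one 1-d filter shared by all rows

# Apply the same 1d convolution to every row.
rows_conv = np.stack([np.convolve(row, w, 'same') for row in X2d])
print(rows_conv.shape)                   # (5, 10)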

Recommended answer

The problem you see arises because TF does not really compute a convolution. If you look at an explanation of what convolution actually does (search for visual explanations of convolution), you will see that the second function is flipped:

  1. Express each function in terms of a dummy variable.
  2. Reflect one of the functions (this is the flip).
  3. ...and some other steps that I will not copy here.
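
The flip can be seen directly in numpy: np.convolve reverses the kernel internally, while np.correlate does not. A quick illustration (the values are arbitrary):

import numpy as np

a = np.array([1., 0., 2., 3., 0., 1., 1.])
k = np.array([2., 1., 3.])

# Convolution flips the kernel; correlating with a pre-flipped kernel gives the same result.
print(np.convolve(a, k, 'valid'))
print(np.correlate(a, k[::-1], 'valid'))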

TF does everything except that flip. So all you need to do is flip the kernel, either in TF or in numpy. For the 1d case, flipping just means reversing the kernel; for the 2d case you need to flip both axes (rotate the kernel by 180°; a 2d sketch follows the example below).

import tensorflow as tf
import numpy as np

I = [1, 0, 2, 3, 0, 1, 1]
K = [2, 1, 3]

i = tf.constant(I, dtype=tf.float32, name='i')
k = tf.constant(K, dtype=tf.float32, name='k')

data   = tf.reshape(i, [1, int(i.shape[0]), 1], name='data')
kernel = tf.reshape(k, [int(k.shape[0]), 1, 1], name='kernel')

# tf.nn.conv1d computes cross-correlation, i.e. it never flips the kernel.
res = tf.squeeze(tf.nn.conv1d(data, kernel, 1, 'VALID'))
with tf.Session() as sess:
    print(sess.run(res))
    # Reversing K before np.convolve undoes numpy's flip, so the outputs match.
    print(np.convolve(I, K[::-1], 'valid'))
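
For the 2d case mentioned above, flipping both axes just means reversing the kernel along both dimensions, e.g. kernel[::-1, ::-1]. A minimal check of that relation using scipy.signal as a reference (scipy and the array values here are assumptions for illustration, not part of the original answer):

import numpy as np
from scipy import signal

A  = np.random.rand(4, 5)
K2 = np.random.rand(3, 3)

# Cross-correlation with a 180-degree-rotated kernel equals true 2d convolution.
print(np.allclose(signal.correlate2d(A, K2[::-1, ::-1], mode='valid'),
                  signal.convolve2d(A, K2, mode='valid')))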

