Different 2D convolution results between keras and scipy


Question


I found some results difficult to understand when trying to debug my neural network. I tried to do some computations offline using scipy (1.3.0), and I am not having the same results as with keras (2.3.1) with a tensorflow (1.14.0) backend. Here is a minimal reproducible example:

from keras.layers import Conv2D, Input
from keras.models import Model
import numpy as np
from scipy.signal import convolve2d

image = np.array([[-1.16551484e-04, -1.88735046e-03, -7.90571701e-03,
        -1.52302440e-02, -1.55315138e-02, -8.40757508e-03,
        -2.12123734e-03, -1.49851941e-04],
       [-1.88735046e-03, -3.05623915e-02, -1.28019482e-01,
        -2.46627569e-01, -2.51506150e-01, -1.36146188e-01,
        -3.43497843e-02, -2.42659380e-03],
       [-7.90571701e-03, -1.28019482e-01, -5.06409585e-01,
        -6.69258237e-01, -6.63918257e-01, -5.31925797e-01,
        -1.43884048e-01, -1.01644937e-02],
       [-1.52302440e-02, -2.46627569e-01, -6.69258296e-01,
         2.44587708e+00,  2.72079444e+00, -6.30891442e-01,
        -2.77190477e-01, -1.95817426e-02],
       [-1.55315138e-02, -2.51506120e-01, -6.63918316e-01,
         2.72079420e+00,  3.01719952e+00, -6.19484246e-01,
        -2.82673597e-01, -1.99690927e-02],
       [-8.40757508e-03, -1.36146188e-01, -5.31925797e-01,
        -6.30891442e-01, -6.19484186e-01, -5.57167232e-01,
        -1.53017864e-01, -1.08097391e-02],
       [-2.12123734e-03, -3.43497805e-02, -1.43884048e-01,
        -2.77190447e-01, -2.82673597e-01, -1.53017864e-01,
        -3.86065207e-02, -2.72730505e-03],
       [-1.49851941e-04, -2.42659380e-03, -1.01644937e-02,
        -1.95817426e-02, -1.99690927e-02, -1.08097391e-02,
        -2.72730505e-03, -1.92666746e-04]], dtype='float32')

kernel = np.array([[ 0.04277903,  0.5318366 ,  0.025291916],
       [ 0.5756132 , -0.493123  ,  0.116359994],
       [ 0.10616145, -0.319581  , -0.115053006]], dtype='float32')

print('Mean of original image', np.mean(image))

## Scipy result

res_scipy = convolve2d(image, kernel.T, mode='same')

print('Mean of convolution with scipy', np.mean(res_scipy))

## Keras result

def init(shape, dtype=None):
    return kernel[..., None, None]
im = Input((None, None, 1))
im_conv = Conv2D(1, 3, padding='same', use_bias=False, kernel_initializer=init)(im)
model = Model(im, im_conv)

model.compile(loss='mse', optimizer='adam')

res_keras = model.predict_on_batch(image[None, ..., None])

print('Mean of convolution with keras', np.mean(res_keras))


When visualizing the results, I found that they are actually symmetric to each other (point symmetry around the center, modulo a small shift).


I tried some empirical fixes, like transposing the kernel, but that didn't change anything.


EDIT Thanks to @kaya3's comment, I realized that rotating the kernel by 180 degrees did the trick. However, I still don't understand why I need to do this to get the same results.
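A tiny 1D check (plain numpy, not part of the original question) makes the flip visible: numpy's convolve flips the kernel before sliding it, while correlate does not, so correlating with a kernel is the same as convolving with that kernel reversed.

```python
import numpy as np

# Minimal 1D illustration: correlation with an asymmetric kernel
# equals convolution with the reversed (flipped) kernel.
signal = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([1.0, 0.0, -1.0])  # deliberately asymmetric

corr_full = np.correlate(signal, kernel, mode='full')
flipped_full = np.convolve(signal, kernel[::-1], mode='full')

print(np.allclose(corr_full, flipped_full))  # True
```

In 2D the flip happens on both axes, which is exactly a 180 degree rotation of the kernel.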

Answer


What is usually called convolution in neural networks (and image processing) is not exactly the mathematical concept of convolution, which is what convolve2d implements, but the similar one of correlation, which is implemented by correlate2d:

from scipy.signal import correlate2d

res_scipy = correlate2d(image, kernel.T, mode='same')
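As a sanity check (a sketch, not from the original answer), one can verify numerically that cross-correlation with a kernel is identical to true convolution with that kernel rotated by 180 degrees, which is why the rotation trick in the edit worked:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8)).astype('float32')
kernel = rng.standard_normal((3, 3)).astype('float32')

# Cross-correlation (what Conv2D computes) equals true convolution
# with the kernel flipped on both axes, i.e. rotated by 180 degrees.
corr = correlate2d(image, kernel, mode='same')
conv_flipped = convolve2d(image, kernel[::-1, ::-1], mode='same')

print(np.allclose(corr, conv_flipped))  # True
```

Since the kernel here is odd-sized (3x3), the `mode='same'` outputs of the two calls align exactly.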

