python convolution with different dimension

Problem description

I'm trying to implement a convolutional neural network in Python.
However, when I use signal.convolve or np.convolve, it cannot perform the convolution on X and Y (X is 3-D, Y is 2-D). X is a training mini-batch and Y is a filter. I don't want to write a for loop over every training vector like:

for i in xrange(X.shape[2]):
    result = signal.convolve(X[:,:,i], Y, 'valid')
    ....

So, is there any function I can use to do convolution efficiently?

Recommended answer

Scipy implements standard N-dimensional convolution, so the array being convolved and the kernel must both be N-dimensional.

A quick fix would be to add an extra dimension to Y so that it becomes 3-dimensional:

result = signal.convolve(X, Y[..., None], 'valid')

I'm assuming here that the last axis corresponds to the image index, as in your example [width, height, image_idx] (or [height, width, image_idx]). If it is the other way around and the images are indexed along the first axis (as is more common with C-ordered arrays), you should replace Y[..., None] with Y[None, ...].

The expression Y[..., None] adds an extra axis to Y, making it 3-dimensional with shape [kernel_width, kernel_height, 1], and thus turning it into a valid 3-dimensional convolution kernel.
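
As a quick shape check (using a hypothetical 5x5 kernel; only the shapes matter here):

import numpy as np

Y = np.zeros((5, 5))           # a hypothetical 5x5 kernel
print(Y[..., None].shape)      # (5, 5, 1) -> kernel for [height, width, image_idx] data
print(Y[None, ...].shape)      # (1, 5, 5) -> kernel for [image_idx, height, width] data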

NOTE: This assumes that all your input mini-batches have the same width x height, which is standard in CNNs.
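
Putting it together, here is a minimal sketch with random data and hypothetical shapes (10 images of 28x28 and one 5x5 filter), assuming the images are stacked along the last axis as above:

import numpy as np
from scipy import signal

X = np.random.randn(28, 28, 10)    # mini-batch: 10 images of 28x28, stacked on the last axis
Y = np.random.randn(5, 5)          # one 5x5 filter

result = signal.convolve(X, Y[..., None], 'valid')
print(result.shape)                # (24, 24, 10): every image convolved with the same filter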

Some timings as @Divakar suggested.

The test setup is as follows:

# Run inside IPython/Jupyter: %timeit is an IPython magic command.
import numpy as np
from scipy import signal, ndimage

def test(S, N, K):
    """ S: image size, N: num images, K: kernel size"""
    a = np.random.randn(S, S, N)   # N images of size S x S, stacked along the last axis
    b = np.random.randn(K, K)      # a single K x K kernel
    # slices that crop a full-size result down to the 'valid' region
    valid = (slice(K//2, -K//2+1), slice(K//2, -K//2+1))

    %timeit signal.convolve(a, b[..., None], 'valid')
    %timeit signal.fftconvolve(a, b[..., None], 'valid')
    %timeit ndimage.convolve(a, b[..., None])[valid]

Find below the tests for different configurations:

  • Varying the image size S:

    >>> test(100, 50, 11) # 100x100 images
    1 loop, best of 3: 909 ms per loop
    10 loops, best of 3: 116 ms per loop
    10 loops, best of 3: 54.9 ms per loop

    >>> test(1000, 50, 11) # 1000x1000 images
    1 loop, best of 3: 1min 51s per loop
    1 loop, best of 3: 16.5 s per loop
    1 loop, best of 3: 5.66 s per loop

  • Varying the number of images N:

    >>> test(100, 5, 11) # 5 images
    10 loops, best of 3: 90.7 ms per loop
    10 loops, best of 3: 26.7 ms per loop
    100 loops, best of 3: 5.7 ms per loop
    
    >>> test(100, 500, 11) # 500 images
    1 loop, best of 3: 9.75 s per loop
    1 loop, best of 3: 888 ms per loop
    1 loop, best of 3: 727 ms per loop
    

  • Varying the kernel size K:

    >>> test(100, 50, 5) # 5x5 kernels
    1 loop, best of 3: 217 ms per loop
    10 loops, best of 3: 100 ms per loop
    100 loops, best of 3: 11.4 ms per loop
    
    >>> test(100, 50, 31) # 31x31 kernels
    1 loop, best of 3: 4.39 s per loop
    1 loop, best of 3: 220 ms per loop
    1 loop, best of 3: 560 ms per loop
    

So, in short, ndimage.convolve is always faster, except when the kernel size is very large (as with K = 31 in the last test).
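
For completeness, a minimal sketch of the ndimage.convolve variant (hypothetical shapes; assumes an odd kernel size and crops the borders the same way the valid slices in the test function do, since ndimage.convolve returns a full-size array with 'reflect' boundary handling by default):

import numpy as np
from scipy import ndimage

X = np.random.randn(28, 28, 10)    # hypothetical mini-batch, images along the last axis
Y = np.random.randn(5, 5)          # hypothetical 5x5 filter
K = Y.shape[0]

full = ndimage.convolve(X, Y[..., None])       # same shape as X, boundaries filled by reflection
valid = full[K//2:-(K//2), K//2:-(K//2), :]    # keep only the 'valid' region
print(valid.shape)                             # (24, 24, 10)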
