Performing Convolution (NOT cross-correlation) in pytorch


Problem Description

I have a network that I am trying to implement in pytorch, and I cannot seem to figure out how to implement "pure" convolution. In tensorflow it could be accomplished like this:

import tensorflow as tf

def conv2d_flipkernel(x, k, name=None):
    # Flip the kernel before the (cross-correlating) conv2d call,
    # turning it into a true convolution
    return tf.nn.conv2d(x, flipkernel(k), name=name,
                        strides=(1, 1, 1, 1), padding='SAME')

where the flipkernel function is:

def flipkernel(kern):
      return kern[(slice(None, None, -1),) * 2 + (slice(None), slice(None))]
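
This slicing is just a spatial flip: TensorFlow stores 2-D convolution kernels in HWIO order (height, width, in-channels, out-channels), so reversing the first two axes mirrors the kernel vertically and horizontally while leaving the channel axes untouched. A minimal NumPy check (the 2x2 kernel here is made up for illustration):

import numpy as np

# A 2x2 single-channel kernel in TensorFlow's HWIO layout
k = np.arange(4).reshape(2, 2, 1, 1)

# Reverse the first two (spatial) axes, exactly as flipkernel does
flipped = k[(slice(None, None, -1),) * 2 + (slice(None), slice(None))]

print(k[..., 0, 0])        # [[0 1]
                           #  [2 3]]
print(flipped[..., 0, 0])  # [[3 2]
                           #  [1 0]]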

How can something similar be done in pytorch?

Recommended Answer

TLDR Use the convolution from the functional toolbox, torch.nn.functional.conv2d, not torch.nn.Conv2d, and flip your filter around the vertical and horizontal axes.

torch.nn.Conv2d is a convolutional layer for a network. Because its weights are learned, it does not matter that it is implemented using cross-correlation: the network will simply learn a mirrored version of the kernel (thanks @etarion for this clarification).

torch.nn.functional.conv2d performs convolution with the inputs and weights provided as arguments, similar to the tensorflow function in your example. I wrote a simple test to determine whether, like the tensorflow function, it actually performs cross-correlation, so that the filter must be flipped to get correct convolutional results.

import torch
import torch.nn.functional as F
import torch.autograd as autograd
import numpy as np

# A vertical edge detection filter.
# Because this filter is not symmetric, for correct convolution
# the filter must be flipped before element-wise multiplication.
filters = autograd.Variable(torch.FloatTensor([[[[-1, 1]]]]))

# A test image of a square
inputs = autograd.Variable(torch.FloatTensor([[[[0, 0, 0, 0, 0, 0, 0],
                                                [0, 0, 1, 1, 1, 0, 0],
                                                [0, 0, 1, 1, 1, 0, 0],
                                                [0, 0, 1, 1, 1, 0, 0],
                                                [0, 0, 0, 0, 0, 0, 0]]]]))
print(F.conv2d(inputs, filters))

This outputs:

Variable containing:
(0 ,0 ,.,.) = 
  0  0  0  0  0  0
  0  1  0  0 -1  0
  0  1  0  0 -1  0
  0  1  0  0 -1  0
  0  0  0  0  0  0
[torch.FloatTensor of size 1x1x5x6]

This output is the result of cross-correlation, not convolution: the unflipped kernel [-1, 1] responds to the left (0 to 1) edge of the square with 0*(-1) + 1*1 = +1, whereas true convolution would first mirror the kernel to [1, -1] and respond there with -1. Therefore, we need to flip the filter:

def flip_tensor(t):
    # Flip the kernel along every axis. For this 1x1xHxW filter the
    # batch/channel axes have size 1, so only the spatial flip matters.
    flipped = t.numpy().copy()
    for i in range(len(t.size())):     # use t, not the global filters
        flipped = np.flip(flipped, i)  # reverse the tensor on dimension i
    return torch.from_numpy(flipped.copy())

print(F.conv2d(inputs, autograd.Variable(flip_tensor(filters.data))))

The new output is the correct result for convolution.

Variable containing:
(0 ,0 ,.,.) = 
  0  0  0  0  0  0
  0 -1  0  0  1  0
  0 -1  0  0  1  0
  0 -1  0  0  1  0
  0  0  0  0  0  0
[torch.FloatTensor of size 1x1x5x6] 
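
On recent PyTorch versions the same flip can be done without NumPy (and without Variable, which has since been merged into Tensor). A minimal sketch, assuming a PyTorch new enough to provide torch.flip, which reverses only the dimensions you name; dims 2 and 3 are the kernel's height and width:

import torch
import torch.nn.functional as F

filters = torch.tensor([[[[-1.0, 1.0]]]])

# The same square test image as above, built programmatically
inputs = torch.zeros(1, 1, 5, 7)
inputs[0, 0, 1:4, 2:5] = 1.0

# Flip only the spatial dimensions, then cross-correlate: a true convolution
print(F.conv2d(inputs, torch.flip(filters, dims=[2, 3])))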
