TensorFlow's asymmetric padding assumptions


Problem Description

Why has TensorFlow chosen to prefer padding on the bottom right?

With SAME padding, to me it would feel logical to start the kernel's center anchor at the first real pixel. Due to the use of asymmetric padding, this results in a discrepancy with some other frameworks. I do understand that asymmetric padding in principle is good because otherwise one would be left with an unused padding row/column.

If TensorFlow had given precedence to padding on the left and top, its convolutions and weights would be the same as Caffe/cudnn/$frameworks, and weight conversion would be compatible regardless of padding.
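
To make the asymmetry concrete, here is a small sketch of the rule TensorFlow documents for 'SAME' padding (no dilation in this question, so dilation=1); the helper name same_pad_amounts is mine, not part of any API:

import math

def same_pad_amounts(in_size, kernel_size, stride, dilation=1):
    # With 'SAME' padding the output size is ceil(in_size / stride).
    out_size = math.ceil(in_size / stride)
    # Total padding needed so that out_size windows fit into the input.
    pad_total = max((out_size - 1) * stride + dilation * (kernel_size - 1) + 1 - in_size, 0)
    # Flooring the left amount puts any odd leftover pixel on the right/bottom.
    pad_left = pad_total // 2
    pad_right = pad_total - pad_left
    return pad_left, pad_right

print(same_pad_amounts(in_size=5, kernel_size=2, stride=2))  # (0, 1): the extra zero goes on the right

For the 4-wide, stride-2 example below this rule yields no padding at all, while the symmetric helper in the PyTorch code pads one zero on each side, which is where the differing outputs come from.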

Code:

import numpy as np
import tensorflow as tf
import torch
import torch.nn as nn

tf.enable_eager_execution()

def conv1d_tf(data, kernel_weights, stride):
    # TensorFlow's conv1d expects filters of shape [filter_width, in_channels, out_channels].
    filters = np.reshape(kernel_weights, [len(kernel_weights), 1, 1])
    out = tf.nn.conv1d(
        value=data,
        filters=filters,
        stride=stride,
        padding='SAME',      # TensorFlow decides the padding split itself
        data_format='NCW',
        )
    return out


def conv1d_pytorch(data, kernel_weights, stride):
    # PyTorch's Conv1d expects weights of shape [out_channels, in_channels, kernel_size].
    filters = np.reshape(kernel_weights, [1, 1, len(kernel_weights)])
    kernel_size = len(kernel_weights)
    size = data.shape[-1]

    # PyTorch only takes a single padding value per dimension and applies it
    # symmetrically on both sides, so the amount is computed by hand here.
    def same_padding(size, kernel_size, stride, dilation):
        padding = ((size - 1) * (stride - 1) + dilation * (kernel_size - 1)) // 2
        return padding

    padding = same_padding(size=size, kernel_size=kernel_size, stride=stride, dilation=0)
    conv = nn.Conv1d(
        in_channels=1,
        out_channels=1,
        kernel_size=kernel_size,
        stride=stride,
        bias=False,
        padding=padding,
        )
    conv.weight = torch.nn.Parameter(torch.from_numpy(filters))
    return conv(torch.from_numpy(data))


data = np.array([[[1, 2, 3, 4]]], dtype=np.float32)
kernel_weights = np.array([0, 1], dtype=np.float32)
stride = 2

out_tf = conv1d_tf(data=data, kernel_weights=kernel_weights, stride=stride)
out_pytorch = conv1d_pytorch(data=data, kernel_weights=kernel_weights, stride=stride)

print('TensorFlow: %s' % out_tf)
print('pyTorch: %s' % out_pytorch)

Output:

TensorFlow: tf.Tensor([[[2. 4.]]], shape=(1, 1, 2), dtype=float32)
pyTorch: tensor([[[1., 3.]]], grad_fn=<SqueezeBackward1>)

Recommended Answer

This is for historical compatibility reasons with previous (non-public) frameworks. It is unfortunate that the definitions aren't clearer, since it's a common stumbling block when porting between different libraries.
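
A common way around that stumbling block when porting to PyTorch is to skip Conv1d's symmetric padding argument, compute TensorFlow's per-side amounts yourself, and pad explicitly before an unpadded convolution. A minimal sketch, assuming the rule described above (conv1d_tf_same is a made-up name, not an existing API):

import torch
import torch.nn.functional as F

def conv1d_tf_same(data, weight, stride):
    # data: (N, C, W) float tensor, weight: (out_channels, in_channels, kernel_size) tensor.
    kernel_size = weight.shape[-1]
    in_size = data.shape[-1]
    out_size = -(-in_size // stride)                 # ceil(in_size / stride)
    pad_total = max((out_size - 1) * stride + kernel_size - in_size, 0)
    pad_left = pad_total // 2                        # any extra pixel goes on the right, as in TF
    pad_right = pad_total - pad_left
    data = F.pad(data, (pad_left, pad_right))        # explicit, possibly asymmetric zero padding
    return F.conv1d(data, weight, stride=stride, padding=0)

For the data in the question ([1, 2, 3, 4], kernel [0, 1], stride 2) this rule adds no padding and should reproduce TensorFlow's [2., 4.].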
