Calculating the Euclidean Norm in Pytorch.. Trouble understanding an implementation


Problem description


I've seen another StackOverflow thread talking about the various implementations for calculating the Euclidean norm, and I'm having trouble seeing why/how a particular implementation works.

The code is found in an implementation of the MMD metric: https://github.com/josipd/torch-two-sample/blob/master/torch_two_sample/statistics_diff.py

Here is some beginning boilerplate:

import torch
sample_1, sample_2 = torch.ones((10,2)), torch.zeros((10,2))

Then the next part is where we pick up from the code above. I'm unsure why the samples are being concatenated together:

sample_12 = torch.cat((sample_1, sample_2), 0)
distances = pdist(sample_12, sample_12, norm=2)

and are then passed to the pdist function:

def pdist(sample_1, sample_2, norm=2, eps=1e-5):
    r"""Compute the matrix of all pairwise distances.
    Arguments
    ---------
    sample_1 : torch.Tensor or Variable
        The first sample, should be of shape ``(n_1, d)``.
    sample_2 : torch.Tensor or Variable
        The second sample, should be of shape ``(n_2, d)``.
    norm : float
        The l_p norm to be used.
    Returns
    -------
    torch.Tensor or Variable
        Matrix of shape (n_1, n_2). The [i, j]-th entry is equal to
        ``|| sample_1[i, :] - sample_2[j, :] ||_p``."""

Here we get to the meat of the calculation:

    n_1, n_2 = sample_1.size(0), sample_2.size(0)
    norm = float(norm)
    if norm == 2.:
        norms_1 = torch.sum(sample_1**2, dim=1, keepdim=True)
        norms_2 = torch.sum(sample_2**2, dim=1, keepdim=True)
        norms = (norms_1.expand(n_1, n_2) +
             norms_2.transpose(0, 1).expand(n_1, n_2))
        distances_squared = norms - 2 * sample_1.mm(sample_2.t())
        return torch.sqrt(eps + torch.abs(distances_squared))

I am at a loss as to why the Euclidean norm would be calculated this way. Any insight would be greatly appreciated.

Solution

Let's walk through this block of code step by step. The definition of the Euclidean (L2) distance between two vectors x and y is

d(x, y) = ||x - y||_2 = sqrt(sum_k (x_k - y_k)^2)

Let's consider the simplest case. We have two samples, a and b. Sample a has two row vectors, [a00, a01] and [a10, a11]; the same holds for sample b. Let's first calculate the squared norms:

n1, n2 = a.size(0), b.size(0)   # here both n1 and n2 have the value 2
norm1 = torch.sum(a**2, dim=1)  # squared L2 norm of each row of a
norm2 = torch.sum(b**2, dim=1)  # squared L2 norm of each row of b

Now we get norm1 = [a00^2 + a01^2, a10^2 + a11^2] and norm2 = [b00^2 + b01^2, b10^2 + b11^2], i.e. the squared norm of every row vector in each sample.
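As a sanity check, the per-row squared norms can be reproduced in plain Python (the values for a and b below are hypothetical, picked just for illustration; torch is not required):

```python
# Hypothetical example values for the two samples (not from the original post).
a = [[1.0, 2.0], [3.0, 4.0]]   # rows play the roles of [a00, a01] and [a10, a11]
b = [[0.0, 1.0], [1.0, 1.0]]

# Equivalent of torch.sum(x**2, dim=1): squared L2 norm of each row.
norm1 = [sum(x * x for x in row) for row in a]
norm2 = [sum(x * x for x in row) for row in b]

print(norm1)  # [5.0, 25.0]
print(norm2)  # [1.0, 2.0]
```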

Next, we have norms_1.expand(n_1, n_2) and norms_2.transpose(0, 1).expand(n_1, n_2). expand broadcasts the (n_1, 1) column vector norms_1 across n_2 columns, while norms_2 is transposed into a row vector and broadcast across n_1 rows. The sum of the two gives norms, whose [i, j] entry is ||a_i||^2 + ||b_j||^2.
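The expand-and-add step can be sketched in plain Python; the norm values below are hypothetical, and the nested comprehension mimics what broadcasting does:

```python
# Hypothetical squared norms for two samples (n1 = n2 = 2).
norm1 = [5.0, 25.0]  # plays the role of norms_1, shape (n1, 1)
norm2 = [1.0, 2.0]   # plays the role of norms_2, shape (n2, 1)

# Equivalent of norms_1.expand(n1, n2) + norms_2.transpose(0, 1).expand(n1, n2):
# entry [i][j] is norm1[i] + norm2[j].
norms = [[n1_i + n2_j for n2_j in norm2] for n1_i in norm1]
print(norms)  # [[6.0, 7.0], [26.0, 27.0]]
```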

sample_1.mm(sample_2.t()) is the matrix product of the two samples; its [i, j] entry is the dot product of row i of sample_1 with row j of sample_2.
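The same matrix product, sketched torch-free with hypothetical sample values:

```python
# Hypothetical example values (not from the original post).
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[0.0, 1.0], [1.0, 1.0]]

# Equivalent of sample_1.mm(sample_2.t()): entry [i][j] is dot(a[i], b[j]).
dots = [[sum(x * y for x, y in zip(ai, bj)) for bj in b] for ai in a]
print(dots)  # [[2.0, 3.0], [4.0, 7.0]]
```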

Therefore, after the operation

distances_squared = norms - 2 * sample_1.mm(sample_2.t())

you get distances_squared[i][j] = ||a_i||^2 + ||b_j||^2 - 2 a_i·b_j = ||a_i - b_j||^2, the squared Euclidean distance between every pair of rows.

Finally, the last step takes the square root of every element in the matrix. The eps term and torch.abs in the return statement guard against tiny negative values in distances_squared caused by floating-point round-off, which would otherwise make the square root produce NaN.
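The whole trick can be verified end to end in plain Python, again with hypothetical sample values: the ||x||^2 + ||y||^2 - 2 x·y expansion must agree with the brute-force definition of the Euclidean distance:

```python
import math

# Hypothetical sample values (not from the original post).
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[0.0, 1.0], [1.0, 1.0]]

def pdist_trick(s1, s2):
    """Pairwise distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y."""
    out = []
    for x in s1:
        row = []
        for y in s2:
            nx = sum(v * v for v in x)
            ny = sum(v * v for v in y)
            dot = sum(u * v for u, v in zip(x, y))
            row.append(math.sqrt(nx + ny - 2 * dot))
        out.append(row)
    return out

def pdist_naive(s1, s2):
    """Brute-force definition, for comparison."""
    return [[math.sqrt(sum((u - v) ** 2 for u, v in zip(x, y))) for y in s2]
            for x in s1]

assert all(
    math.isclose(p, q)
    for pr, qr in zip(pdist_trick(a, b), pdist_naive(a, b))
    for p, q in zip(pr, qr)
)
```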

