Autograd.grad() for Tensor in pytorch
Question
I want to compute the gradient between two tensors in a net. The input X tensor (batch size x m) is sent through a set of convolutional layers which give me back an output Y tensor (batch size x n).
I'm creating a new loss and I would like to know the gradient of Y w.r.t. X, something that in TensorFlow would look like:
tf.gradients(ys=Y, xs=X)
Unfortunately, I've been running tests with torch.autograd.grad(), but I could not figure out how to do it. I get errors like: "RuntimeError: grad can be implicitly created only for scalar outputs".
What should the inputs to torch.autograd.grad() be if I want to know the gradient of Y w.r.t. X?
Answer
Let's start with a simple working example using a plain loss function and a regular backward pass. We will build a short computational graph and do some gradient computations on it.
Code:
import torch
from torch.autograd import grad
import torch.nn as nn
# Create some dummy data.
x = torch.ones(2, 2, requires_grad=True)
gt = torch.ones_like(x) * 16 - 0.5 # "ground-truths"
# We will use MSELoss as an example.
loss_fn = nn.MSELoss()
# Do some computations.
v = x + 2
y = v ** 2
# Compute loss.
loss = loss_fn(y, gt)
print(f'Loss: {loss}')
# Now compute gradients:
d_loss_dx = grad(outputs=loss, inputs=x)
print(f'dloss/dx:\n{d_loss_dx}')
输出:
Loss: 42.25
dloss/dx:
(tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]]),)
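As a quick sanity check, we can reproduce that value with the chain rule by hand (a sketch that reuses x, gt and y from the snippet above):
Code:
# Hand-derived chain-rule check (same x, gt and y as above).
# loss = mean((y - gt) ** 2) over 4 elements, with y = (x + 2) ** 2 = 9.
dloss_dy = 2 * (y - gt) / y.numel()  # 2 * (9 - 15.5) / 4 = -3.25
dy_dx = 2 * (x + 2)                  # 2 * (1 + 2) = 6
print(dloss_dy * dy_dx)              # -3.25 * 6 = -19.5 in every entry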
Ok, this works! Now let's try to reproduce the error "grad can be implicitly created only for scalar outputs". As you can notice, the loss in the previous example is a scalar. backward() and grad() deal with a single scalar value by default: loss.backward(torch.tensor(1.)). If you try to pass a tensor with more values, you will get an error.
Code:
v = x + 2
y = v ** 2

try:
    # y is non-scalar, so grad() cannot implicitly create grad_outputs.
    dy_hat_dx = grad(outputs=y, inputs=x)
except RuntimeError as err:
    print(err)
Output:
grad can be implicitly created only for scalar outputs
Therefore, when using grad() you need to specify the grad_outputs parameter as follows:
Code:
v = x + 2
y = v ** 2

dy_dx = grad(outputs=y, inputs=x, grad_outputs=torch.ones_like(y))
print(f'dy/dx:\n{dy_dx}')

dv_dx = grad(outputs=v, inputs=x, grad_outputs=torch.ones_like(v))
print(f'dv/dx:\n{dv_dx}')
Output:
dy/dx:
(tensor([[6., 6.],[6., 6.]]),)
dv/dx:
(tensor([[1., 1.], [1., 1.]]),)
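grad_outputs is the vector in a vector-Jacobian product, so values other than ones weight each output's contribution. A small illustration with hypothetical weights, reusing x from above:
Code:
v = x + 2
y = v ** 2

# Weight the top-left output by 2 and mask out the rest;
# the matching entries of dy/dx are scaled the same way.
weights = torch.tensor([[2., 0.], [0., 0.]])
dy_dx_weighted = grad(outputs=y, inputs=x, grad_outputs=weights)
print(dy_dx_weighted)  # (tensor([[12., 0.], [0., 0.]]),)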
NOTE: If you are using backward() instead, simply do y.backward(torch.ones_like(y)).
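For completeness, a minimal sketch of the backward() route; unlike grad(), it accumulates the result into x.grad:
Code:
v = x + 2
y = v ** 2

y.backward(torch.ones_like(y))  # equivalent to grad(y, x, grad_outputs=ones)
print(x.grad)  # tensor([[6., 6.], [6., 6.]])
x.grad = None  # backward() accumulates, so clear the gradient before reuse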