torch.optim returns "ValueError: can't optimize a non-leaf Tensor" for multidimensional tensor

Problem description

I am trying to optimize the translations of the vertices of a scene with torch.optim.Adam. It is a code piece from the redner tutorial series, which works fine with the initial setting. It tries to optimize a scene by shifting all the vertices by the same value, called translation. Here is the original code:

# base, camera and objects come from the tutorial's scene setup
# (the objects loaded from the scene file and the camera looking at them).
vertices = []
for obj in base:
    # Keep a copy of the original vertex positions to translate from.
    vertices.append(obj.vertices.clone())

def model(translation):
    # Shift every object's vertices by the same translation vector.
    for obj, v in zip(base, vertices):
        obj.vertices = v + translation
    # Assemble the 3D scene.
    scene = pyredner.Scene(camera = camera, objects = objects)
    # Render the scene.
    img = pyredner.render_albedo(scene)
    return img

# Initial guess
# Set requires_grad=True since we want to optimize them later

translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)

init = model(translation)
# Visualize the initial guess

t_optimizer = torch.optim.Adam([translation], lr=0.5)
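
For context, the tutorial then runs a standard optimisation loop with this optimiser. A minimal sketch of such a loop (target here stands for the reference rendering the tutorial optimises towards; it is not part of the excerpt above):

for t in range(100):
    t_optimizer.zero_grad()
    # Render with the current translation and compare to the target image.
    img = model(translation)
    loss = (img - target).pow(2).mean()
    loss.backward()
    t_optimizer.step()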

I tried to modify the code so that it calculates an individual translation for each of the vertices. For this I applied the following modification to the code above, which changes the shape of translation from torch.Size([3]) to torch.Size([43380, 3]):

# translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)
translation = base[0].vertices.clone().detach().requires_grad_(True)
translation[:] = 10.0

This introduces ValueError: can't optimize a non-leaf Tensor. Could you please help me work around the problem?

PS: I am sorry for the long text, I am very new to this subject, and I wanted to state the problem as comprehensively as possible.

Answer

Only leaf tensors can be optimised. A leaf tensor is a tensor that was created at the beginning of a graph, i.e. there is no operation tracked in the graph that produced it. In other words, when you apply any operation to a tensor with requires_grad=True, it keeps track of these operations to do the back propagation later. You cannot give one of these intermediate results to the optimiser.

An example shows this more clearly:

weight = torch.randn((2, 2), requires_grad=True)
# => tensor([[ 1.5559,  0.4560],
#            [-1.4852, -0.8837]], requires_grad=True)

weight.is_leaf # => True

result = weight * 2
# => tensor([[ 3.1118,  0.9121],
#            [-2.9705, -1.7675]], grad_fn=<MulBackward0>)
# grad_fn defines how to do the back propagation (kept track of the multiplication)

result.is_leaf # => False
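
Handing such a non-leaf tensor to the optimiser is exactly what triggers the error from the question. Continuing the example above (a minimal sketch):

# Passing the non-leaf tensor to the optimiser reproduces the error:
optimizer = torch.optim.Adam([result], lr=0.5)
# => ValueError: can't optimize a non-leaf Tensor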

The result in this example cannot be optimised, since it is not a leaf tensor. Similarly, in your case translation is not a leaf tensor because of the operation you perform after it was created:

translation[:] = 10.0
translation.is_leaf # => False

This has grad_fn=<CopySlices>, therefore it is not a leaf and you cannot pass it to the optimiser. To avoid that, you would have to create a new tensor from it that is detached from the graph.

# Not setting requires_grad, so that the next operation is not tracked
translation = base[0].vertices.clone().detach()
translation[:] = 10.0
# Now setting requires_grad so it is tracked in the graph and can be optimised
translation = translation.requires_grad_(True)

What you're really doing here is creating a new tensor filled with the value 10.0 with the same size as the vertices tensor. This can be achieved much more easily with torch.full_like:

translation = torch.full_like(base[0].vertices, 10.0, requires_grad=True)
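
A quick check with a stand-in vertex tensor (a minimal sketch; base[0].vertices itself is only available inside the redner tutorial) confirms that the result is a leaf and is accepted by the optimiser:

import torch

vertices = torch.randn(43380, 3)  # stand-in for base[0].vertices

translation = torch.full_like(vertices, 10.0, requires_grad=True)
translation.is_leaf  # => True: created directly, no tracked operation

# A leaf tensor can be passed to the optimiser without the ValueError.
t_optimizer = torch.optim.Adam([translation], lr=0.5)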
