Pytorch preferred way to copy a tensor


Question


There seem to be several ways to create a copy of a tensor in PyTorch, including

y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d

b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable.
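For reference, here's a minimal reproduction of the warning (my own snippet; the exact message wording varies across PyTorch versions):

import torch
import warnings

x = torch.randn(3)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    y = torch.tensor(x)  # method d: copy-constructing from an existing tensor

for w in caught:
    print(w.message)  # recommends sourceTensor.clone().detach() instead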

Any reasons for/against using c?

Solution

TL;DR

Use .clone().detach() (or preferably .detach().clone()), as it's slightly faster and explicit in what it does:

"If you first detach the tensor and then clone it, the computation path is not copied; the other way around it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient." -- pytorch forums
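To see the difference in behavior, a minimal sketch (the example tensor is mine):

import torch

x = torch.randn(3, requires_grad=True)

y = x.detach().clone()  # detach first, then copy: y carries no grad history
z = x.clone()           # copy first: z still tracks gradients back to x

print(y.requires_grad)  # False
print(z.requires_grad)  # True (z has a CloneBackward grad_fn)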


Using perfplot, I plotted the timing of various methods to copy a PyTorch tensor.

y = tensor.new_tensor(x) # method a
y = x.clone().detach() # method b
y = torch.empty_like(x).copy_(x) # method c
y = torch.tensor(x) # method d
y = x.detach().clone() # method e

The x-axis is the dimension of the tensor created; the y-axis shows the time. The graph is in linear scale. As you can clearly see, tensor() and new_tensor() take more time than the other three methods.

Note: across multiple runs, I noticed that any of b, c, e can have the lowest time. The same holds for a and d. However, b, c, and e consistently time lower than a and d.

import torch
import perfplot

perfplot.show(
    setup=lambda n: torch.randn(n),  # a fresh tensor of length n for each size
    kernels=[
        lambda a: a.new_tensor(a),               # method a
        lambda a: a.clone().detach(),            # method b
        lambda a: torch.empty_like(a).copy_(a),  # method c
        lambda a: torch.tensor(a),               # method d
        lambda a: a.detach().clone(),            # method e
    ],
    labels=["new_tensor()", "clone().detach()", "empty_like().copy_()", "tensor()", "detach().clone()"],
    n_range=[2 ** k for k in range(15)],  # tensor sizes 1 .. 2**14
    xlabel="len(a)",
    logx=False,
    logy=False,
    title='Timing comparison for copying a pytorch tensor',
)
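For completeness, a quick sanity check (my own addition, not part of the benchmark above) that all five methods copy the same values, and which copies stay attached to the autograd graph when the source requires gradients:

import torch

x = torch.randn(5, requires_grad=True)

copies = {
    "a: new_tensor()": x.new_tensor(x),
    "b: clone().detach()": x.clone().detach(),
    "c: empty_like().copy_()": torch.empty_like(x).copy_(x),
    "d: torch.tensor()": torch.tensor(x),
    "e: detach().clone()": x.detach().clone(),
}

for name, y in copies.items():
    # Values match in every case; requires_grad shows whether the copy
    # is still connected to x's computation graph.
    print(name, torch.equal(y, x.detach()), y.requires_grad)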
