How to convert Pytorch autograd.Variable to Numpy?


Problem description

The title says it all. I want to convert a PyTorch autograd.Variable to its equivalent NumPy array. The official documentation suggests calling a.numpy() to get the equivalent NumPy array (for a PyTorch tensor), but this gives me the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/bishwajit/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 63, in __getattr__
    raise AttributeError(name)
AttributeError: numpy

Is there any way I can circumvent this?
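For reference, this is roughly the call that triggers the error, as a minimal sketch assuming a pre-0.4 PyTorch where autograd.Variable is a separate wrapper around Tensor (variable names are illustrative):

    import torch
    from torch.autograd import Variable

    # Wrap a tensor in a Variable (pre-0.4 API).
    a = Variable(torch.randn(3, 3))

    # On those older releases Variable itself has no numpy() method, so this
    # raises "AttributeError: numpy"; newer releases treat a Variable as a Tensor.
    a.numpy()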

Recommended answer

There are two possible cases:

  • Using GPU: if you try to convert a CUDA float tensor directly to NumPy as shown below, it will throw an error.

    x.data.numpy()

    RuntimeError: numpy conversion for FloatTensor is not supported

    So you cannot convert a CUDA float tensor directly to NumPy; you have to convert it to a CPU float tensor first and then convert that to NumPy, as shown below.

    x.data.cpu().numpy()

  • Using CPU: converting a CPU tensor is straightforward.

    x.data.numpy()
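Putting both cases together, here is a minimal end-to-end sketch (variable names are illustrative; on PyTorch 0.4+ Variable is merged into Tensor, so the same calls work on plain tensors):

    import torch
    from torch.autograd import Variable

    # CPU case: pull the underlying tensor out with .data and convert directly.
    x_cpu = Variable(torch.randn(2, 3))
    arr_cpu = x_cpu.data.numpy()
    print(arr_cpu.shape)  # (2, 3)

    # GPU case: copy the tensor back to host memory first, then convert.
    if torch.cuda.is_available():
        x_gpu = Variable(torch.randn(2, 3).cuda())
        arr_gpu = x_gpu.data.cpu().numpy()
        print(arr_gpu.dtype)  # float32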
