torch7: How to flatten a Tensor?


Problem Description

I want to flatten any general n-dimensional torch.Tensor, but in a way that is computationally efficient. (By "flatten" here, I mean converting a given Tensor into a one-dimensional Tensor with the same number of elements as the given tensor.) I am currently using the following steps to do so:

local original_tensor = -- output of some intermediate layer of a conv-net residing in the GPU
local shaping_tensor = torch.Tensor(original_tensor:nElement())   -- 1-D CPU tensor with the same element count
original_tensor = original_tensor:resizeAs(shaping_tensor:cuda()) -- :cuda() first copies shaping_tensor to the GPU

I believe this is slightly inefficient because of :cuda(), which pushes this new Tensor from host memory to the GPU. Can someone please suggest a more efficient way to do this?

Thanks in advance.

Recommended Answer

The typical approach is to create a view (thus not actually reshaping the tensor):

x:view(x:nElement())

This comes directly from the official "Torch for Numpy users" guide: https://github.com/torch/torch7/wiki/Torch-for-Numpy-users
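For context, here is a minimal self-contained sketch of this approach (the 4×8×8 random tensor is a hypothetical stand-in for the conv-net activation in the question). Note that :view requires the tensor's memory to be contiguous; if it may not be, e.g. after a transpose, an explicit :contiguous() copy is needed first:

require 'torch'

-- hypothetical stand-in for an intermediate conv-net activation
local x = torch.rand(4, 8, 8)

-- flatten to 1-D without copying: the view shares x's underlying storage
local flat = x:view(x:nElement())
print(flat:nElement())  -- 256

-- :view only works on contiguous tensors; after e.g. a transpose,
-- make an explicit contiguous copy before flattening
local flat2 = x:transpose(1, 2):contiguous():view(x:nElement())

Because the view shares storage with the original tensor, the same call works on a CudaTensor already residing on the GPU, with no host-to-device transfer involved.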
