How do I flatten a tensor in pytorch?
Given a tensor of multiple dimensions, how do I flatten it so that it has a single dimension?
E.g.:
>>> t = torch.rand([2, 3, 5])
>>> t.shape
torch.Size([2, 3, 5])
How do I flatten it to have shape:
torch.Size([30])
TL;DR: torch.flatten()
Use torch.flatten(), which was introduced in v0.4.1 and documented in v1.0rc1:

>>> t = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
>>> torch.flatten(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
>>> torch.flatten(t, start_dim=1)
tensor([[1, 2, 3, 4],
        [5, 6, 7, 8]])
For v0.4.1 and earlier, use t.reshape(-1)
.
With t.reshape(-1):

If the requested view is contiguous in memory, this will be equivalent to t.view(-1) and the memory will not be copied. Otherwise it will be equivalent to t.contiguous().view(-1).
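The copy-versus-view distinction can be observed with data_ptr(), which reports the address of a tensor's underlying storage; a minimal sketch, assuming a recent PyTorch:

```python
import torch

t = torch.rand(2, 3, 5)

# Contiguous tensor: reshape(-1) returns a view, no copy is made.
flat = t.reshape(-1)
print(flat.shape)                       # torch.Size([30])
print(flat.data_ptr() == t.data_ptr())  # True: same underlying storage

# Non-contiguous tensor (e.g. after a transpose): reshape(-1)
# silently falls back to contiguous().view(-1), which copies.
tt = t.transpose(0, 2)                  # shape [5, 3, 2], non-contiguous
flat2 = tt.reshape(-1)
print(tt.is_contiguous())               # False
print(flat2.data_ptr() == t.data_ptr()) # False: data was copied
```

So reshape(-1) is always safe to call, but whether it aliases or copies depends on the layout of the input.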
Other non-options:

- t.view(-1) won't copy memory, but may not work depending on the original size and stride
- t.resize(-1) gives a RuntimeError (see below)
- t.resize(t.numel()) warns about being a low-level method (see the discussion below)
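The first non-option can be seen directly: on a transposed (non-contiguous) tensor, t.view(-1) raises a RuntimeError, while inserting contiguous() first makes it work. A small sketch:

```python
import torch

t = torch.rand(2, 3, 5).transpose(0, 2)  # non-contiguous

try:
    t.view(-1)                  # fails: strides are incompatible with a flat view
except RuntimeError as e:
    print("view failed:", e)

flat = t.contiguous().view(-1)  # copy into contiguous memory first, then view
print(flat.shape)               # torch.Size([30])
```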
(Note: pytorch's reshape() may change the data, but numpy's reshape() won't.)
t.resize(t.numel()) needs some discussion. The torch.Tensor.resize_ documentation says:

The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged)

Given that the current strides will be ignored with the new (1, numel()) size, the elements may appear in a different order than with reshape(-1). However, "size" may mean the memory size, rather than the tensor's size.
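The order difference can be made concrete by comparing reshape(-1) on a transposed tensor with the raw storage order. A sketch, assuming a recent PyTorch; the resize_ call is wrapped in try/except since some versions refuse to resize a view:

```python
import torch

t = torch.arange(6).reshape(2, 3).t()   # logical view: [[0, 3], [1, 4], [2, 5]]

# reshape(-1) follows the tensor's logical (row-major) order:
print(t.reshape(-1))                    # tensor([0, 3, 1, 4, 2, 5])

# resize_ reinterprets the raw storage as C-contiguous instead,
# so where it is permitted the element order can differ.
try:
    u = torch.arange(6).reshape(2, 3).t()
    u.resize_(6)
    print(u)                            # if allowed: storage order [0, 1, ..., 5]
except RuntimeError as e:
    print("resize_ refused:", e)
```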
It would be nice if t.resize(-1) worked for both convenience and efficiency, but with torch 1.0.1.post2, t = torch.rand([2, 3, 5]); t.resize(-1) gives:

RuntimeError: requested resize to -1 (-1 elements in total), but the given
tensor has a size of 2x2 (4 elements). autograd's resize can only change the
shape of a given tensor, while preserving the number of elements.

I raised a feature request for this here, but the consensus was that resize() is a low-level method, and reshape() should be used in preference.