PyTorch: The number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)


Problem description

I'm trying to convert a CPU model to GPU using Pytorch, but I'm running into issues. I'm running this on Colab and I'm sure that Pytorch detects a GPU. This is a deep Q network (RL).

I declare my network as: Q = Q_Network(input_size, hidden_size, output_size).to(device)

I ran into an issue when I tried to pass arguments through the network (it expected type cuda but got type cpu), so I added .to(device):

batch = np.array(shuffled_memory[i:i+batch_size])
b_pobs = np.array(batch[:, 0].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_pact = np.array(batch[:, 1].tolist(), dtype=np.int32)
b_reward = np.array(batch[:, 2].tolist(), dtype=np.int32)
b_obs = np.array(batch[:, 3].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_done = np.array(batch[:, 4].tolist(), dtype=np.bool)

q = Q(torch.from_numpy(b_pobs).to(device))
q_ = Q_ast(torch.from_numpy(b_obs).to(device))

maxq = torch.max(q_.data,axis=1)
target = copy.deepcopy(q.data)

for j in range(batch_size):
    print(target[j, b_pact[j]].shape) # torch.Size([])
    target[j, b_pact[j]] = b_reward[j]+gamma*maxq[j]*(not b_done[j]) #I run into issues here

Here is the error:

RuntimeError: expand(torch.cuda.FloatTensor{[50]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)

Recommended answer

target[j, b_pact[j]] is a single element of the tensor (a scalar, hence its size is torch.Size([])). If you want to assign anything to it, the right-hand side can only be a scalar. That is not the case here, because one of the terms is a tensor with 1 dimension (a vector), namely your maxq[j].
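
As a minimal sketch of that mismatch (the shapes below are made up for illustration), assigning a 1-D tensor into a single element raises the same expand error, while assigning a 0-dimensional scalar works:

import torch

target = torch.zeros(4, 3)                    # stand-in for the Q-value table
vec = torch.arange(50, dtype=torch.float32)   # 1-D tensor, analogous to maxq[j] here

# target[0, 1] = vec    # RuntimeError: expand(torch.FloatTensor{[50]}, size=[]): ...
target[0, 1] = vec.max()                      # torch.max without dim returns a 0-dim scalar
print(target[0, 1])                           # tensor(49.)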

When specifying a dimension dim (axis is treated as a synonym) to torch.max, it returns a named tuple of (values, indices), where values contains the maximum values and indices the location of each maximum (equivalent to argmax).
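
For example (a minimal sketch with made-up numbers):

import torch

q = torch.tensor([[1.0, 5.0, 3.0],
                  [4.0, 2.0, 6.0]])

out = torch.max(q, dim=1)   # axis=1 behaves the same
print(out.values)           # tensor([5., 6.])
print(out.indices)          # tensor([1, 2])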

maxq[j] is therefore not indexing into the maximum values, but into the (values, indices) tuple itself. If you only want the values, you can use one of the following to get them out of the tuple (all of them are equivalent, use whichever you prefer):

# Destructure/unpack and ignore the indices
maxq, _ = torch.max(q_.data, axis=1)

# Access the first element of the tuple
maxq = torch.max(q_.data, axis=1)[0]

# Access `values` of the named tuple
maxq = torch.max(q_.data, axis=1).values
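
With maxq holding only the values, maxq[j] is a 0-dimensional scalar, so the assignment no longer triggers the expand error. A sketch of the corrected update, assuming the same variables (q_, batch_size, target, b_pact, b_reward, gamma, b_done) as in the question:

maxq = torch.max(q_.data, axis=1).values

for j in range(batch_size):
    # maxq[j] is now a scalar, so the right-hand side is a scalar as well
    target[j, b_pact[j]] = b_reward[j] + gamma * maxq[j] * (not b_done[j])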
