Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
Problem description
I am using PyTorch to train a model, but I got a runtime error while it was computing the cross-entropy loss.
Traceback (most recent call last):
File "deparser.py", line 402, in <module>
d.train()
File "deparser.py", line 331, in train
total, correct, avgloss = self.train_util()
File "deparser.py", line 362, in train_util
loss = self.step(X_train, Y_train, correct, total)
File "deparser.py", line 214, in step
loss = nn.CrossEntropyLoss()(out.long(), y)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 862, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 1550, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 975, in log_softmax
return input.log_softmax(dim)
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
I think this is caused by the .cuda() call, or by a conversion between torch.Float and torch.Long. I have tried many combinations of .cpu()/.cuda() and .long()/.float() on the variables, but it still does not work, and searching Google for this error message turns up nothing. Can anyone help me? Thanks!!!
This is the code that causes the error:
def step(self, x, y, correct, total):
self.optimizer.zero_grad()
out = self.forward(*x)
loss = nn.CrossEntropyLoss()(out.long(), y)
loss.backward()
self.optimizer.step()
_, predicted = torch.max(out.data, 1)
total += y.size(0)
correct += int((predicted == y).sum().data)
return loss.data
This step() function is called from:
def train_util(self):
total = 0
correct = 0
avgloss = 0
for i in range(self.step_num_per_epoch):
X_train, Y_train = self.trainloader()
self.optimizer.zero_grad()
if torch.cuda.is_available():
self.cuda()
for i in range(len(X_train)):
X_train[i] = Variable(torch.from_numpy(X_train[i]))
X_train[i].requires_grad = False
X_train[i] = X_train[i].cuda()
Y_train = torch.from_numpy(Y_train)
Y_train.requires_grad = False
Y_train = Y_train.cuda()
loss = self.step(X_train, Y_train, correct, total)
avgloss+=float(loss)*Y_train.size(0)
self.optimizer.step()
if i%100==99:
print('STEP %d, Loss: %.4f, Acc: %.4f'%(i+1,loss,correct/total))
return total, correct, avgloss/self.data_len
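As a side note, the Variable wrapper in the loop above is no longer needed in current PyTorch: torch.from_numpy already returns a tensor with requires_grad=False. A minimal sketch of the conversion (x_np is a hypothetical stand-in for one element of X_train):

```python
import numpy as np
import torch

# Hypothetical stand-in for one element of X_train
x_np = np.arange(6, dtype=np.int64).reshape(2, 3)

# torch.from_numpy returns a plain tensor that does not require grad,
# so no Variable wrapper or explicit requires_grad flag is needed
x = torch.from_numpy(x_np)
if torch.cuda.is_available():
    x = x.cuda()
```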
The input data X_train, Y_train = self.trainloader()
are NumPy arrays at the beginning.
This is a data sample:
>>> X_train, Y_train = d.trainloader()
>>> X_train[0].dtype
dtype('int64')
>>> X_train[1].dtype
dtype('int64')
>>> X_train[2].dtype
dtype('int64')
>>> Y_train.dtype
dtype('float32')
>>> X_train[0]
array([[ 0, 6, 0, ..., 0, 0, 0],
[ 0, 1944, 8168, ..., 0, 0, 0],
[ 0, 815, 317, ..., 0, 0, 0],
...,
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 23, 6, ..., 0, 0, 0],
[ 0, 0, 297, ..., 0, 0, 0]])
>>> X_train[1]
array([ 6, 7, 8, 21, 2, 34, 3, 4, 19, 14, 15, 2, 13, 3, 11, 22, 4,
13, 34, 10, 13, 3, 48, 18, 16, 19, 16, 17, 48, 3, 3, 13])
>>> X_train[2]
array([ 4, 5, 8, 36, 2, 33, 5, 3, 17, 16, 11, 0, 9, 3, 10, 20, 1,
14, 33, 25, 19, 1, 46, 17, 14, 24, 15, 15, 51, 2, 1, 14])
>>> Y_train
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
...,
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
dtype=float32)
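Note that Y_train is stored as float32 one-hot rows. The class index of each row can be recovered with argmax; a minimal sketch in plain NumPy (the 4-class array here is made up, just mirroring Y_train's format):

```python
import numpy as np

# A tiny one-hot target array in the same float32 format as Y_train above
Y = np.array([[0., 0., 0., 1.],
              [1., 0., 0., 0.],
              [0., 0., 1., 0.]], dtype=np.float32)

# Decode each one-hot row into its class index as int64
y_idx = Y.argmax(axis=1).astype(np.int64)
print(y_idx)  # [3 0 2]
```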
I tried all possible combinations:
case 1:
loss = nn.CrossEntropyLoss()(out, y)
I get:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
case 2:
loss = nn.CrossEntropyLoss()(out.long(), y)
I get the same error as described above.
case 3:
loss = nn.CrossEntropyLoss()(out.float(), y)
I get:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
case 4:
loss = nn.CrossEntropyLoss()(out, y.long())
I get:
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
case 5:
loss = nn.CrossEntropyLoss()(out.long(), y.long())
I get:
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
case 6:
loss = nn.CrossEntropyLoss()(out.float(), y.long())
I get:
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
case 7:
loss = nn.CrossEntropyLoss()(out, y.float())
I get:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
case 8:
loss = nn.CrossEntropyLoss()(out.long(), y.float())
I get:
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
case 9:
loss = nn.CrossEntropyLoss()(out.float(), y.float())
I get:
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
I found where the problem is.
y should have dtype torch.int64 and contain class indices, not one-hot vectors. CrossEntropyLoss() handles the one-hot comparison internally, while out stays the raw float prediction scores, one value per class (the same layout as a one-hot row).
It can run now!
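Concretely, a minimal sketch of the corrected loss call (the batch size and tensors here are hypothetical, chosen to mirror the 30-class Y_train above):

```python
import torch
import torch.nn as nn

# Hypothetical batch: 4 samples, 30 classes, mirroring Y_train's layout
out = torch.randn(4, 30)                  # float prediction scores from the model
y_onehot = torch.eye(30)[[29, 3, 0, 6]]   # float32 one-hot targets, like Y_train

# Decode the one-hot rows into int64 class indices before the loss
y = y_onehot.argmax(dim=1)                # dtype torch.int64, shape (4,)

# out stays float, y is long: no "host_softmax" / dtype error
loss = nn.CrossEntropyLoss()(out, y)
```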