Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'


Question

I am using PyTorch to train a model, but I got a runtime error while computing the cross-entropy loss.

Traceback (most recent call last):
  File "deparser.py", line 402, in <module>
    d.train()
  File "deparser.py", line 331, in train
    total, correct, avgloss = self.train_util()
  File "deparser.py", line 362, in train_util
    loss = self.step(X_train, Y_train, correct, total)
  File "deparser.py", line 214, in step
    loss = nn.CrossEntropyLoss()(out.long(), y)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 862, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 1550, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 975, in log_softmax
    return input.log_softmax(dim)
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

I think the problem comes from the .cuda() calls or from the conversion between torch.Float and torch.Long. I have tried many combinations of .cpu()/.cuda() and .long()/.float() to change the variables, but it still does not work. I couldn't find this error message when searching on Google. Can anyone help me? Thanks!

Here is the code that causes the error:

def step(self, x, y, correct, total):
    self.optimizer.zero_grad()
    out = self.forward(*x)
    loss = nn.CrossEntropyLoss()(out.long(), y)
    loss.backward()
    self.optimizer.step()
    _, predicted = torch.max(out.data, 1)
    total += y.size(0)
    correct += int((predicted == y).sum().data)
    return loss.data

And this function step() is called by:

def train_util(self):
    total = 0
    correct = 0
    avgloss = 0
    for i in range(self.step_num_per_epoch):
        X_train, Y_train = self.trainloader()
        self.optimizer.zero_grad()
        if torch.cuda.is_available():
            self.cuda()
            for i in range(len(X_train)):
                X_train[i] = Variable(torch.from_numpy(X_train[i]))
                X_train[i].requires_grad = False
                X_train[i] = X_train[i].cuda()
            Y_train = torch.from_numpy(Y_train)
            Y_train.requires_grad = False
            Y_train = Y_train.cuda()
        loss = self.step(X_train, Y_train, correct, total)
        avgloss+=float(loss)*Y_train.size(0)
        self.optimizer.step()
        if i%100==99:
            print('STEP %d, Loss: %.4f, Acc: %.4f'%(i+1,loss,correct/total))

    return total, correct, avgloss/self.data_len
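As a side note, `Variable` has been deprecated since PyTorch 0.4; plain tensors carry autograd state themselves. A minimal sketch of the same numpy-to-GPU transfer without the wrapper (the array shapes here are invented stand-ins for whatever `trainloader()` returns):

```python
import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical stand-ins for the arrays returned by trainloader().
X_train = [np.zeros((3, 5), dtype=np.int64), np.arange(5, dtype=np.int64)]
Y_train = np.eye(4, dtype=np.float32)[:3]   # one-hot float32 rows, as above

# No Variable wrapper needed; .to(device) covers both CPU and GPU.
X_train = [torch.from_numpy(x).to(device) for x in X_train]
Y_train = torch.from_numpy(Y_train).to(device)
```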

The input data X_train, Y_train = self.trainloader() are numpy arrays at the beginning.

Here is an example of the data:

>>> X_train, Y_train = d.trainloader()
>>> X_train[0].dtype
dtype('int64')
>>> X_train[1].dtype
dtype('int64')
>>> X_train[2].dtype
dtype('int64')
>>> Y_train.dtype
dtype('float32')
>>> X_train[0]
array([[   0,    6,    0, ...,    0,    0,    0],
       [   0, 1944, 8168, ...,    0,    0,    0],
       [   0,  815,  317, ...,    0,    0,    0],
       ...,
       [   0,    0,    0, ...,    0,    0,    0],
       [   0,   23,    6, ...,    0,    0,    0],
       [   0,    0,  297, ...,    0,    0,    0]])
>>> X_train[1]
array([ 6,  7,  8, 21,  2, 34,  3,  4, 19, 14, 15,  2, 13,  3, 11, 22,  4,
   13, 34, 10, 13,  3, 48, 18, 16, 19, 16, 17, 48,  3,  3, 13])
>>> X_train[2]
array([ 4,  5,  8, 36,  2, 33,  5,  3, 17, 16, 11,  0,  9,  3, 10, 20,  1,
   14, 33, 25, 19,  1, 46, 17, 14, 24, 15, 15, 51,  2,  1, 14])
>>> Y_train
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       ...,
       [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
      dtype=float32)

I tried all possible combinations:

case 1:

loss = nn.CrossEntropyLoss()(out, y)

I get:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'

case 2:

loss = nn.CrossEntropyLoss()(out.long(), y)

I get the error described above:

RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

case 3:

loss = nn.CrossEntropyLoss()(out.float(), y)

I get:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'

case 4:

loss = nn.CrossEntropyLoss()(out, y.long())

I get:

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

case 5:

loss = nn.CrossEntropyLoss()(out.long(), y.long())

I get:

RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

case 6:

loss = nn.CrossEntropyLoss()(out.float(), y.long())

I get:

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

case 7:

loss = nn.CrossEntropyLoss()(out, y.float())

I get:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'

case 8:

loss = nn.CrossEntropyLoss()(out.long(), y.float())

I get:

RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

case 9:

loss = nn.CrossEntropyLoss()(out.float(), y.float())

I get:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
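The two recurring messages can be reproduced in isolation. A small sketch (shapes invented) showing which dtype combination triggers which failure on the version in the traceback; note that newer PyTorch releases may word the target-dtype error differently:

```python
import torch
import torch.nn as nn

out = torch.randn(2, 5)        # float logits, shape (batch, classes)
y = torch.tensor([1, 3])       # int64 class indices

# This combination works: Float input, Long target.
loss = nn.CrossEntropyLoss()(out, y)

# Integer input fails: softmax has no integer kernel, hence
# "host_softmax" not implemented for the Long tensor type.
try:
    nn.CrossEntropyLoss()(out.long(), y)
except RuntimeError as e:
    print('long input ->', e)
```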

Answer

I know where the problem is.

y should have dtype torch.int64 and contain class indices, without one-hot encoding. CrossEntropyLoss() handles the encoding internally (while out is the predicted score distribution over classes, which resembles a one-hot format).
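Since the Y_train shown above is a one-hot float32 array, the target must be collapsed to class indices before the loss call. A minimal sketch of the fix (the shapes and index values here are hypothetical, chosen to resemble the data above):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

out = torch.randn(4, 30)            # float logits; do NOT call .long() on them
y_onehot = torch.zeros(4, 30)       # one-hot float32 targets, like Y_train
y_onehot[torch.arange(4), torch.tensor([29, 3, 0, 6])] = 1.0

# Collapse one-hot rows to int64 class indices: shape (4,), dtype torch.int64.
y = y_onehot.argmax(dim=1)

loss = criterion(out, y)            # Float input + Long target: no error
```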

It works now!
