Should I use softmax as output when using cross entropy loss in PyTorch?
Question
I have a problem classifying with a fully connected deep neural net with 2 hidden layers for the MNIST dataset in PyTorch.

I want to use tanh as the activation in both hidden layers, but in the end, I should use softmax.
For the loss, I am choosing nn.CrossEntropyLoss() in PyTorch, which (as I have found out) does not take one-hot encoded labels as true labels, but takes a LongTensor of class indices instead.
My model is nn.Sequential(), and when I use softmax at the end, it gives me worse results in terms of accuracy on the test data. Why?
import torch
from torch import nn

inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10
n_epochs = 500

model = nn.Sequential(
    nn.Linear(inputs, n_hidden0, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden0, n_hidden1, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden1, out, bias=True),
    nn.Softmax()  # SHOULD THIS BE THERE?
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.5)

# X_train, Y_train are assumed to be defined elsewhere:
# flattened MNIST images and their class labels as a LongTensor
for epoch in range(n_epochs):
    y_pred = model(X_train)
    loss = criterion(y_pred, Y_train)
    print('epoch: ', epoch+1, ' loss: ', loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Answer
As stated in the torch.nn.CrossEntropyLoss() documentation:

"This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class."
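A quick sanity check of that statement (random tensors, purely illustrative): F.cross_entropy on raw logits gives the same value as F.nll_loss applied to F.log_softmax of those logits.

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
targets = torch.tensor([3, 0, 9, 1])

# cross_entropy on raw logits == nll_loss on log_softmax of the same logits
a = F.cross_entropy(logits, targets)
b = F.nll_loss(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(a, b))  # True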
Therefore, you should not apply softmax before the loss.
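Under that advice, a sketch of the corrected model, reusing the layer sizes from the question (X_test is a hypothetical test batch, not defined in the question):

import torch
from torch import nn

inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10  # sizes from the question

model = nn.Sequential(
    nn.Linear(inputs, n_hidden0, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden0, n_hidden1, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden1, out, bias=True),
    # no Softmax here: nn.CrossEntropyLoss applies log-softmax internally
)

# If class probabilities are needed at inference time, apply softmax
# explicitly, outside the loss computation.
X_test = torch.randn(5, inputs)  # hypothetical test batch
with torch.no_grad():
    probs = torch.softmax(model(X_test), dim=1)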