PyTorch equivalent to tf.nn.softmax_cross_entropy_with_logits and tf.nn.sigmoid_cross_entropy_with_logits


Problem description


I found the post here. In it, we try to find an equivalent of tf.nn.softmax_cross_entropy_with_logits in PyTorch. The answer still confuses me.

Here is the Tensorflow 2 code

import tensorflow as tf
import numpy as np

# here we assume a batch size of 2 with 5 classes

preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])


tf_preds = tf.convert_to_tensor(preds, dtype=tf.float32)
tf_labels = tf.convert_to_tensor(labels, dtype=tf.float32)

loss = tf.nn.softmax_cross_entropy_with_logits(logits=tf_preds, labels=tf_labels)

It gives me the loss as

<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.2427604, 1.0636061], dtype=float32)>

Here is the PyTorch code

import torch
import numpy as np

preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])


torch_preds = torch.tensor(preds).float()
torch_labels = torch.tensor(labels).float()

loss = torch.nn.functional.cross_entropy(torch_preds, torch_labels)

However, it raises:

RuntimeError: 1D target tensor expected, multi-target not supported

It seems that the problem is still unsolved. How to implement tf.nn.softmax_cross_entropy_with_logits in PyTorch?

What about tf.nn.sigmoid_cross_entropy_with_logits?

Solution

- tf.nn.softmax_cross_entropy_with_logits

Edit: This is actually not equivalent to F.cross_entropy. The latter can only handle the single-class classification setting, not the more general case where the label is composed of multiple classes. Indeed, F.cross_entropy takes a single class id as its target (per instance), not a probability distribution over classes, which is what tf.nn.softmax_cross_entropy_with_logits expects to receive.
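For reference, here is a minimal sketch of what F.cross_entropy does accept: a 1D tensor of integer class ids, one per instance (the class ids below are chosen for illustration, they are not taken from the question):

import torch
import torch.nn.functional as F

# F.cross_entropy expects hard targets: integer class ids of shape (batch,)
logits = torch.tensor([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
target_ids = torch.tensor([0, 1])            # class 0 for the first instance, class 1 for the second
loss = F.cross_entropy(logits, target_ids)   # works: scalar mean loss over the batch
# passing a (batch, num_classes) tensor of probabilities instead is what triggers
# "1D target tensor expected, multi-target not supported" in the question above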

>>> logits = torch.tensor([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]])
>>> labels = torch.tensor([[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]])

To get the desired result, apply a log-softmax to your logits and then take the negative log-likelihood:

>>> -torch.sum(F.log_softmax(logits, dim=1) * labels, dim=1)
tensor([0.1698, 0.8247])
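If you need this in several places, it can be wrapped into a small helper. This is just a sketch (the function name is mine, not a PyTorch API), and it reproduces the TensorFlow numbers from the question:

import torch
import torch.nn.functional as F

def softmax_cross_entropy_with_logits(logits, labels, dim=-1):
    # cross entropy between soft labels and softmax(logits), one value per instance
    return -(F.log_softmax(logits, dim=dim) * labels).sum(dim=dim)

preds = torch.tensor([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = torch.tensor([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
print(softmax_cross_entropy_with_logits(preds, labels))
# tensor([1.2428, 1.0636]) -- matches the tf.nn.softmax_cross_entropy_with_logits output above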


- tf.nn.sigmoid_cross_entropy_with_logits

For this one you can apply F.binary_cross_entropy_with_logits.

>>> F.binary_cross_entropy_with_logits(logits, labels, reduction='none')
tensor([[0.0181, 2.1269, 1.3133],
        [0.6931, 1.0067, 1.1133]])

It is equivalent to applying a sigmoid and then the negative log-likelihood, treating each class as a binary classification task:

>>> labels*-torch.log(torch.sigmoid(logits)) + (1-labels)*-torch.log(1-torch.sigmoid(logits))
tensor([[0.0181, 2.1269, 1.3133],
        [0.6931, 1.0067, 1.1133]])
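One reason to prefer the fused F.binary_cross_entropy_with_logits over the explicit sigmoid-then-log form is numerical stability. Here is a small sketch, using made-up extreme logits, showing the difference:

import torch
import torch.nn.functional as F

big_logits = torch.tensor([[1000.0, -1000.0]])
targets = torch.tensor([[0.0, 1.0]])

# explicit form: the sigmoid saturates to 0/1, so the logs overflow to inf
manual = targets * -torch.log(torch.sigmoid(big_logits)) \
    + (1 - targets) * -torch.log(1 - torch.sigmoid(big_logits))

# fused form: combines the sigmoid and the loss with the log-sum-exp trick, stays finite
fused = F.binary_cross_entropy_with_logits(big_logits, targets, reduction='none')

print(manual)  # tensor([[inf, inf]])
print(fused)   # tensor([[1000., 1000.]])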


All of the above assumes torch.nn.functional has been imported as F.
