Binary cross-entropy vs. categorical cross-entropy with 2 classes

Problem Description

When considering the problem of classifying an input into one of 2 classes, 99% of the examples I have seen use an NN with a single output and a sigmoid activation followed by a binary cross-entropy loss. Another option I thought of is having the last layer produce 2 outputs and using a categorical cross-entropy with C=2 classes, but I have never seen this in any example. Is there any reason for that?

Thanks

Recommended Answer

If you use a softmax on top of a two-output network, you get an output that is mathematically equivalent to using a single output with a sigmoid on top. Do the math and you'll see.
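
Concretely, for the two logits z_0 and z_1 of the two-output head, the softmax probability of class 1 collapses to a sigmoid of the logit difference:

```latex
p(y=1)
  = \frac{e^{z_1}}{e^{z_0} + e^{z_1}}
  = \frac{1}{1 + e^{-(z_1 - z_0)}}
  = \sigma(z_1 - z_0)
```

So the two-output softmax head is a single-output sigmoid head in disguise, with the effective logit z = z_1 - z_0.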

In practice, in my experience, if you look at the raw "logits" of the two-output net (before the softmax), you'll see that one is exactly the negative of the other. This is a result of the gradients pulling each neuron in exactly the opposite direction.
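
A minimal NumPy sketch (my illustration, not from the answer) of what that symmetry implies: when the two logits are exact negatives of each other, the softmax head produces the same probability as a single sigmoid applied to twice the logit:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = 1.3                                   # hypothetical raw logit value
two_output = softmax(np.array([-z, z]))   # the two logits are exact negatives
print(two_output[1])    # P(class 1) from the two-output softmax head: 0.9309...
print(sigmoid(2 * z))   # identical value from a single sigmoid output
```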

Therefore, since both approaches are equivalent, and the single-output configuration has fewer parameters and requires less computation, it is more advantageous to use a single output with a sigmoid on top.
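
To make the cost difference concrete, here is a minimal sketch of the two heads in PyTorch (my choice of framework; the thread does not name one). The feature size, batch, and layer names are hypothetical:

```python
import torch
import torch.nn as nn

features = torch.randn(8, 16)           # hypothetical batch: 8 samples, 16 features
labels = torch.randint(0, 2, (8,))      # hypothetical binary labels

# Option A: single output, sigmoid + binary cross-entropy.
# BCEWithLogitsLoss applies the sigmoid internally.
head_a = nn.Linear(16, 1)               # 16 weights + 1 bias = 17 parameters
loss_a = nn.BCEWithLogitsLoss()(head_a(features).squeeze(1), labels.float())

# Option B: two outputs, softmax + categorical cross-entropy.
# CrossEntropyLoss applies the log-softmax internally.
head_b = nn.Linear(16, 2)               # 32 weights + 2 biases = 34 parameters
loss_b = nn.CrossEntropyLoss()(head_b(features), labels)
```

Both losses train the network toward the same decision function; option A simply does it with half the head parameters and one fewer output to compute.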
